Convergence properties of cascade correlation in function approximation

Incremental neural networks have received increasing interest in the neural computing field, especially for their ability to reduce training time. Among the various emerging algorithms, cascade correlation has become widely used. The algorithm gives satisfactory results in many applications; the reason for this, however, remains an open problem. In this paper, we prove a theorem which guarantees that the cascade correlation algorithm converges. Moreover, we prove that it converges at a rate of at least O(1/n_h), where n_h is the number of hidden neurons, when approximating a function consisting of a finite sum of sigmoids. This guarantees that, in applications where the well-known backpropagation algorithm gives a good representation of the training data, cascade correlation is able to obtain very similar results while saving a considerable amount of computer time, as observed in practice. Computer simulations confirm that the implemented cascade correlation algorithm achieves this convergence speed.
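To illustrate the setting of the result, the following is a minimal NumPy sketch (not the paper's implementation) of a cascade-correlation-style loop: hidden units are added one at a time, each chosen from random candidates to correlate with the current residual, then frozen while only the output layer is retrained. The target function, the number of candidates, and all weight values are illustrative assumptions; the candidate phase here uses random search rather than the gradient-based training of the full algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Assumed target: a finite sum of sigmoids, the function class
# considered in the convergence analysis.
x = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
target = (2.0 * sigmoid(3.0 * x - 1.0) - 1.5 * sigmoid(-2.0 * x + 0.5)).ravel()

def fit_outputs(features, y):
    """Least-squares fit of the linear output layer (bias included)."""
    A = np.hstack([features, np.ones((features.shape[0], 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w, A @ w

features = x.copy()            # cascade inputs: original input, then hidden outputs
w, pred = fit_outputs(features, target)

for nh in range(1, 9):
    residual = target - pred
    # Candidate phase (simplified): pick, among random candidate units, the
    # sigmoid whose output correlates best with the residual error.
    best_corr, best_out = -np.inf, None
    for _ in range(200):
        v = rng.normal(size=features.shape[1] + 1)
        out = sigmoid(np.hstack([features, np.ones((len(x), 1))]) @ v)
        corr = abs(np.corrcoef(out, residual)[0, 1])
        if corr > best_corr:
            best_corr, best_out = corr, out
    # Install the winning unit (its input weights are now frozen) and
    # retrain only the output layer.
    features = np.hstack([features, best_out.reshape(-1, 1)])
    w, pred = fit_outputs(features, target)
    mse = np.mean((target - pred) ** 2)
    print(f"hidden units = {nh}, training MSE = {mse:.6f}")
```

Plotting the printed training error against n_h gives a simple empirical check of whether the error shrinks at least as fast as 1/n_h on such a target.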