Adaptive training of multilayer neural networks using a least squares estimation technique

A technique is developed for the training of artificial neural networks, using a modification of the Marquardt-Levenberg optimization technique. An adaptive choice of the convergence-rate factor μ, based on the contribution of each neuron to the minimization of the error function, is presented; this choice can be very useful in handling the problem of local minima of the error function. The proposed algorithm is more powerful, but also more elaborate, than backpropagation. Moreover, it can be shown that in some applications its computational complexity can be made similar to that of backpropagation by using fast implementations of the least-squares method.
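As a rough illustration of the kind of update the abstract describes (not the authors' exact algorithm), the sketch below trains a tiny one-hidden-layer network with a Marquardt-Levenberg step, (JᵀJ + μI)Δw = -Jᵀe. The per-neuron adaptation of μ from the paper is not reproduced; a single scalar μ with the standard accept/reject rule stands in for it, and the network sizes, toy data, and helper names are all assumptions made for the example.

```python
# Minimal Marquardt-Levenberg (Levenberg-Marquardt) training sketch for a
# one-hidden-layer network on toy data. Illustrative only: the paper's
# neuron-wise choice of mu is replaced by a single adaptively scaled scalar.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x)
X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
Y = np.sin(X)

n_in, n_hid, n_out = 1, 8, 1
n_par = n_in * n_hid + n_hid + n_hid * n_out + n_out

def unpack(w):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = w[i:i + n_out]
    return W1, b1, W2, b2

def residuals(w):
    """Residual vector e = prediction - target, flattened over samples."""
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2 - Y).ravel()

def jacobian(w, eps=1e-6):
    """Finite-difference Jacobian of the residuals w.r.t. the weights."""
    e0 = residuals(w)
    J = np.empty((e0.size, w.size))
    for k in range(w.size):
        wp = w.copy()
        wp[k] += eps
        J[:, k] = (residuals(wp) - e0) / eps
    return J

w = 0.5 * rng.standard_normal(n_par)
mu = 1e-2  # convergence-rate (damping) factor

for it in range(200):
    e = residuals(w)
    J = jacobian(w)
    # Marquardt-Levenberg step: solve (J^T J + mu I) dw = -J^T e.
    dw = np.linalg.solve(J.T @ J + mu * np.eye(n_par), -J.T @ e)
    if np.sum(residuals(w + dw) ** 2) < np.sum(e ** 2):
        w, mu = w + dw, mu * 0.7   # step accepted: trust the quadratic model more
    else:
        mu *= 2.0                  # step rejected: move toward a gradient-descent step

print("final sum-of-squares error:", np.sum(residuals(w) ** 2))
```

The fast least-squares implementations mentioned in the abstract would replace the dense solve above; the finite-difference Jacobian is used here only to keep the sketch short and self-contained.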