Terminal attractor learning algorithms for back propagation neural networks

Novel learning algorithms called terminal attractor backpropagation (TABP) and heuristic terminal attractor backpropagation (HTABP) for multilayer networks are proposed. The algorithms are based on the concept of terminal attractors, which are fixed points of a dynamical system at which the Lipschitz condition is violated. The key idea in the proposed algorithms is the introduction of time-varying gains into the weight update law. The proposed algorithms preserve the parallel and distributed features of neurocomputing, guarantee that the learning process converges in finite time, and find the set of weights that globally minimizes the error function, provided such a set of weights exists. Simulations demonstrate the global optimization properties and the superiority of the proposed algorithms over the standard backpropagation algorithm.
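The mechanism behind the time-varying gain can be illustrated with a small sketch. In terminal attractor dynamics, an error that evolves as dE/dt = -k * E^beta with 0 < beta < 1 reaches E = 0 in the finite time E(0)^(1-beta) / (k * (1-beta)), because the right-hand side violates the Lipschitz condition at the fixed point. The Python sketch below applies a gain of this form to gradient descent on a toy least-squares problem; the quadratic error, the exponent beta = 1/3, the gain normalisation, and the stability cap are illustrative assumptions, not the exact TABP/HTABP update law.

import numpy as np

# Minimal sketch of a terminal-attractor-style gradient update on a toy
# least-squares problem. The problem, the exponent beta = 1/3, and the gain
# normalisation/cap are illustrative assumptions, not the paper's update law.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                          # noiseless targets, so E can reach 0

w = np.zeros(3)                         # weights to be learned
beta, eta, tol = 1.0 / 3.0, 0.1, 1e-8
L = np.linalg.eigvalsh(X.T @ X).max()   # curvature bound used to cap the step

for step in range(10_000):
    r = X @ w - y                       # residuals
    E = 0.5 * float(r @ r)              # quadratic error function
    if E < tol:
        break
    grad = X.T @ r                      # dE/dw
    # Time-varying gain: chosen so that, in continuous time,
    # dE/dt = -eta * E**beta, which drives E to zero in finite time because
    # the right-hand side violates the Lipschitz condition at E = 0.
    # The cap 1/L keeps the discrete-time step stable near the minimum.
    gain = min(eta * E ** beta / (float(grad @ grad) + 1e-12), 1.0 / L)
    w -= gain * grad

print(f"stopped at step {step} with error {E:.2e}, weights {w}")

Early in training the terminal-attractor gain gives large, rapidly shrinking error; near the minimum the cap takes over so the discrete iteration remains stable, whereas a pure continuous-time analysis needs no such cap.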
