An adaptive training algorithm for back-propagation neural networks

To improve the convergence speed of the back-propagation training algorithm, the authors use a dynamic learning rate derived from a weighted average of the direction cosines between the incremental weight vectors Delta W of the current and several previous iterations. These adjacent direction cosines reflect the local curvature of the error surface, along which an 'optimum' search for the minimum error determines the weight adjustment at the next iteration. The authors tested the method on a real problem: training a three-layer feedforward artificial neural network for REM (rapid eye movement) sleep-stage recognition. Training performance improved significantly, in terms of both faster convergence and smaller final error, when the last three direction cosines were included in determining the dynamic learning rate.
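The abstract does not give the exact update rule, but the idea can be sketched as follows: the learning rate grows when successive weight increments point in roughly the same direction (direction cosines near +1, suggesting low curvature) and shrinks when they oscillate (cosines near -1). This is a minimal illustration, assuming a simple multiplicative adjustment and hypothetical blending weights; the authors' actual weighting scheme may differ.

```python
import math


def direction_cosine(dw_new, dw_old):
    """Cosine of the angle between two successive weight-increment vectors."""
    dot = sum(a * b for a, b in zip(dw_new, dw_old))
    norm = math.sqrt(sum(a * a for a in dw_new)) * math.sqrt(sum(b * b for b in dw_old))
    return dot / norm if norm > 0.0 else 0.0


def adaptive_learning_rate(base_rate, delta_w_history, weights=(0.5, 0.3, 0.2)):
    """Scale the base rate by a weighted average of direction cosines between
    the current Delta W (last entry of delta_w_history) and the previous few.

    `weights` here are illustrative, not the paper's values: aligned steps
    (positive cosines) increase the rate, oscillating steps decrease it.
    """
    current = delta_w_history[-1]
    total = 0.0
    for lag, w in enumerate(weights, start=2):
        if lag > len(delta_w_history):
            break  # not enough history yet for this lag
        total += w * direction_cosine(current, delta_w_history[-lag])
    return base_rate * (1.0 + total)
```

For example, two consecutive increments along the same axis yield a cosine of +1 and raise the rate, while a sign flip between increments yields -1 and lowers it.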
