New dynamical optimal learning for linear multilayer FNN

This letter presents a new dynamical optimal learning (DOL) algorithm for three-layer linear neural networks and investigates its generalization ability. The optimal learning rates can be fully determined during the training process. The mean squared error (MSE) is guaranteed to decrease stably, and the learning is less sensitive to the initial parameter settings. Simulation results show that the proposed DOL algorithm achieves better generalization performance and faster convergence than the standard error backpropagation (BP) algorithm.
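
The abstract does not spell out the update rule, but the core idea (choosing, at each epoch, the learning rate that minimizes the MSE along the current gradient direction) can be illustrated. Below is a minimal sketch, assuming a three-layer linear network y = x W1 W2 trained on MSE. Because each layer is linear, the loss along a fixed descent direction is an exact quartic polynomial in the step size, so the optimal rate can be recovered numerically from five loss samples; the paper instead derives the rate analytically. All variable and data names here are illustrative, not taken from the paper.

```python
# Hypothetical sketch: per-epoch optimal learning-rate selection for a
# three-layer *linear* network y = x @ W1 @ W2 trained on MSE.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: d_in inputs -> d_out targets through a hidden layer.
N, d_in, h, d_out = 200, 5, 8, 3
X = rng.standard_normal((N, d_in))
T = X @ rng.standard_normal((d_in, d_out))          # linear target map

W1 = 0.1 * rng.standard_normal((d_in, h))
W2 = 0.1 * rng.standard_normal((h, d_out))

def mse(W1, W2):
    E = X @ W1 @ W2 - T
    return float(np.mean(E * E))

for epoch in range(50):
    # Gradients of the MSE with respect to both weight matrices.
    E = X @ W1 @ W2 - T                              # N x d_out residuals
    G2 = (X @ W1).T @ E * (2.0 / (N * d_out))
    G1 = X.T @ (E @ W2.T) * (2.0 / (N * d_out))

    # phi(eta) = mse(W1 - eta*G1, W2 - eta*G2) is an exact quartic in eta,
    # so five samples determine it; minimize it analytically via its roots.
    etas = np.linspace(0.0, 1.0, 5)
    vals = [mse(W1 - e * G1, W2 - e * G2) for e in etas]
    poly = np.polyfit(etas, vals, 4)
    crit = [r.real for r in np.roots(np.polyder(poly))
            if abs(r.imag) < 1e-9 and r.real > 0]
    eta = min(crit, key=lambda e: np.polyval(poly, e), default=1e-3)

    W1 -= eta * G1
    W2 -= eta * G2

print(f"final MSE: {mse(W1, W2):.6f}")
```

The quartic structure is the design point: the network output is bilinear in (W1, W2), so the prediction is quadratic in the step size and the squared error is degree four, which is why five loss evaluations suffice to pin down the rate exactly. Since every step minimizes the loss along the descent direction, the MSE cannot increase, which matches the stability claim in the abstract.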
