An H∞ control approach to robust learning of feedforward neural networks

A novel H∞ robust control approach to the learning problem of feedforward neural networks (FNNs) is proposed in this study. The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system governing the estimation error. This viewpoint reveals the drawbacks of several existing learning algorithms, especially when the output data change rapidly with respect to the input or are corrupted by noise. Based on this approach, the optimal learning parameters can be found via linear matrix inequality (LMI) optimization techniques so that a prescribed H∞ noise attenuation level is achieved. Several existing backpropagation (BP)-type algorithms are shown to be special cases of the new H∞ learning algorithm. Theoretical analysis and several examples demonstrate the advantages of the new method.
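The update law below is a minimal numerical sketch of the idea, not the paper's exact algorithm: it applies the standard discrete-time H∞ (a priori) filter recursion to the output-layer weights of an FNN whose hidden layer is held fixed, so that the weights play the role of the state of a discrete dynamic system and the prescribed attenuation level gamma bounds the worst-case effect of noise on the estimation error. The function name hinf_update and the parameters gamma, Q, and R are illustrative assumptions; in the paper the optimal learning parameters would instead come from the LMI step.

```python
import numpy as np

def hinf_update(w, P, h, y, gamma, Q=1e-4, R=1.0):
    """One H-infinity-filter-style update of a weight estimate.

    Assumed model (illustrative): y_k = h_k^T w + v_k, with the weights w
    treated as the state of a discrete dynamic system, in the spirit of
    the paper's robust-control view of FNN learning.

    w     : current weight estimate, shape (n,)
    P     : Riccati-type matrix carried between steps, shape (n, n)
    h     : regressor (hidden-layer output), shape (n,)
    y     : scalar target sample, possibly noise-corrupted
    gamma : prescribed H-infinity attenuation level (> 0)
    Q, R  : assumed process/measurement weighting scalars
    """
    n = w.size
    H = h.reshape(1, n)
    theta = 1.0 / gamma ** 2

    # Existence condition for the chosen gamma at this step:
    # P^{-1} - theta*I + H^T R^{-1} H must be positive definite.
    feas = np.linalg.inv(P) - theta * np.eye(n) + (H.T @ H) / R
    if np.min(np.linalg.eigvalsh(feas)) <= 0.0:
        raise ValueError("gamma too small: H-infinity level infeasible")

    # Riccati-style correction shared by the gain and the P update.
    A = np.eye(n) - theta * P + ((H.T @ H) / R) @ P
    PA = P @ np.linalg.inv(A)

    K = PA @ H.T / R                 # H-infinity gain, shape (n, 1)
    e = y - float(H @ w)             # a priori output error
    w_new = w + K[:, 0] * e
    P_new = PA + Q * np.eye(n)
    return w_new, P_new

# Usage sketch: track the output weights of a random-feature network
# on noisy samples of a rapidly varying target.
rng = np.random.default_rng(0)
n_feat = 20
W_hid = rng.standard_normal(n_feat)   # fixed hidden-layer weights
b_hid = rng.standard_normal(n_feat)
w, P = np.zeros(n_feat), np.eye(n_feat)
for _ in range(500):
    x = rng.uniform(-1.0, 1.0)
    h = np.tanh(W_hid * x + b_hid)                      # hidden-layer output
    y = np.sin(5.0 * x) + 0.1 * rng.standard_normal()   # noisy target
    w, P = hinf_update(w, P, h, y, gamma=5.0)
```

Letting gamma → ∞ sends theta → 0 and collapses this recursion to a Kalman/recursive-least-squares-type update, which mirrors the abstract's claim that several existing learning rules arise as special cases of the H∞ formulation; smaller gamma trades convergence speed for stronger worst-case noise attenuation.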
