Identification of nonlinear systems using new dynamic neural network structures

The authors study the stability and convergence properties of recurrent high-order neural networks (RHONNs) as models of nonlinear dynamical systems. The RHONN consists of dynamical neurons distributed throughout the network and interconnected by high-order connections. It is shown that if a sufficiently large number of high-order connections between neurons is allowed, the RHONN model can approximate the input-output behavior of general dynamical systems to any degree of accuracy. Exploiting the linear-in-the-weights property of the RHONN model, the authors develop identification schemes, derive adaptive laws for adjusting the weights, and analyze the convergence and stability properties of these laws. In the case of no modeling error, the state error between the system and the RHONN model converges to zero asymptotically. If modeling errors are present, the sigma-modification is proposed as a method of guaranteeing the stability of the overall scheme. The feasibility of applying these techniques is demonstrated on the identification of a simple rigid robotic system.
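The identification scheme summarized above can be sketched in a few lines. The following is a minimal illustrative simulation, not the paper's exact formulation: the toy plant, the particular high-order terms, and all gains are assumptions chosen for the example. It shows the two ingredients the abstract names: a model that is linear in the weights, and a gradient adaptive law with a sigma-modification term that keeps the weights bounded when modeling error is present.

```python
import numpy as np

def sigmoid(v):
    """Sigmoidal activation used to build the high-order terms."""
    return 1.0 / (1.0 + np.exp(-v))

def z_terms(x, u):
    """High-order terms: products of sigmoids of state and input.
    This particular choice of four terms is an illustrative assumption."""
    s, r = sigmoid(x), sigmoid(u)
    return np.array([s, r, s * r, s * s])

def identify(a=1.0, gamma=50.0, sigma=0.01, dt=1e-3, T=20.0):
    """Identify a scalar toy plant with a single-state RHONN.

    Model (linear in the weights w):  xhat' = -a*xhat + w @ z(x, u)
    Adaptive law with sigma-modification, where e = xhat - x:
        w' = -gamma * z * e - sigma * gamma * w
    """
    x, xhat = 0.5, 0.0
    w = np.zeros(4)
    for k in range(int(T / dt)):
        u = np.sin(0.5 * k * dt)          # persistently exciting input
        z = z_terms(x, u)
        e = xhat - x                      # state (identification) error
        # Unknown plant to be identified (an assumed toy system).
        x_dot = -x + np.tanh(x) + u
        xhat_dot = -a * xhat + w @ z      # RHONN model dynamics
        # Gradient adaptive law; the sigma term guarantees bounded
        # weights when the model cannot match the plant exactly.
        w_dot = -gamma * z * e - sigma * gamma * w
        x += dt * x_dot                   # forward-Euler integration
        xhat += dt * xhat_dot
        w += dt * w_dot
    return e, w
```

With no modeling error the gradient law alone would drive the state error to zero asymptotically; here the four high-order terms cannot represent the plant nonlinearity exactly, so the sigma term trades a small residual error for guaranteed boundedness of the weights, as the abstract describes.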

[1]  George Cybenko,et al.  Approximation by superpositions of a sigmoidal function , 1992, Math. Control. Signals Syst..

[2]  Manolis A. Christodoulou,et al.  Robot identification using dynamical neural networks , 1991, [1991] Proceedings of the 30th IEEE Conference on Decision and Control.

[3]  Pierre Baldi,et al.  Neural networks, orientations of the hypercube, and algebraic threshold functions , 1988, IEEE Trans. Inf. Theory.

[4]  J J Hopfield,et al.  Neurons with graded response have collective computational properties like those of two-state neurons. , 1984, Proceedings of the National Academy of Sciences of the United States of America.

[5]  John J. Craig,et al.  Introduction to Robotics: Mechanics and Control , 1986.

[6]  Petros A. Ioannou,et al.  Robust adaptive control: a unified approach , 1991 .

[7]  Paul J. Werbos  Backpropagation through time: what it does and how to do it , 1990, Proc. IEEE.

[8]  M.R. Azimi-Sadjadi,et al.  Detection of dim targets in high cluttered background using high order correlation neural network , 1991, IJCNN-91-Seattle International Joint Conference on Neural Networks.

[9]  Kumpati S. Narendra,et al.  Gradient methods for the optimization of dynamical systems containing neural networks , 1991, IEEE Trans. Neural Networks.

[10]  Phillip J. McKerrow,et al.  Introduction to robotics , 1991 .

[11]  A. Dembo,et al.  High-order absolutely stable neural networks , 1991 .

[12]  Fernando J. Pineda,et al.  Generalization of back-propagation to recurrent neural networks , 1987, Physical Review Letters.

[13]  Ronald J. Williams,et al.  A Learning Algorithm for Continually Running Fully Recurrent Neural Networks , 1989, Neural Computation.

[14]  C. L. Giles,et al.  Second-order recurrent neural networks for grammatical inference , 1991, IJCNN-91-Seattle International Joint Conference on Neural Networks.

[16]  Kumpati S. Narendra,et al.  Identification and control of dynamical systems using neural networks , 1990, IEEE Trans. Neural Networks.

[17]  R. Sanner,et al.  Gaussian Networks for Direct Adaptive Control , 1991 .

[18]  Neil E. Cotter,et al.  The Stone-Weierstrass theorem and its application to neural networks , 1990, IEEE Trans. Neural Networks.

[19]  Graham C. Goodwin,et al.  Adaptive filtering prediction and control , 1984 .

[20]  Ken-ichi Funahashi,et al.  On the approximate realization of continuous mappings by neural networks , 1989, Neural Networks.