Improved learning of multiple continuous trajectories with initial network state

This study addresses the problem of learning multiple continuous trajectories by means of recurrent neural networks with (in general) time-varying weights. The learning task is transformed into an optimal control problem in which both the weights and the initial network state are treated as controls to be determined. Based on a variational formulation of Pontryagin's maximum principle, a new learning algorithm is proposed which generalizes the one given previously (1999). Under reasonable assumptions, its convergence is also discussed. A numerical example of learning a two-class problem is presented which demonstrates the efficiency of the proposed approach.
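
As a rough illustration of this optimal-control view (a generic sketch under assumed notation, not the paper's exact formulation): let the network dynamics be $\dot{x}(t) = f(x(t), w(t))$ with initial state $x(0) = x_0$, where the time-varying weights $w(t)$ and the initial state $x_0$ are both treated as controls, and let the tracking cost for a desired trajectory $d(t)$ over the horizon $[0, T]$ be

\[
J(w, x_0) = \int_0^T \big\| C\,x(t) - d(t) \big\|^2 \, dt .
\]

Introducing a costate $\lambda(t)$ and the Hamiltonian $H = \| C x - d \|^2 + \lambda^{\top} f(x, w)$, a variational (Pontryagin-type) argument gives the adjoint equation

\[
\dot{\lambda}(t) = -\Big(\tfrac{\partial f}{\partial x}\Big)^{\!\top} \lambda(t) - 2\,C^{\top}\big(C x(t) - d(t)\big), \qquad \lambda(T) = 0,
\]

together with the gradients used to update both controls,

\[
\frac{\partial J}{\partial w(t)} = \Big(\tfrac{\partial f}{\partial w}\Big)^{\!\top} \lambda(t), \qquad \frac{\partial J}{\partial x_0} = \lambda(0).
\]

For several trajectories the cost and the resulting gradients are summed over the training patterns. The second gradient is what treating the initial network state as an additional control contributes beyond adjusting the time-varying weights $w(t)$ alone.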

[1] Jacques Ludik, et al. A Multilayer Real-Time Recurrent Learning Algorithm for Improved Convergence, 1997, ICANN.

[2] Marios M. Polycarpou, et al. High-order neural network structures for identification of dynamical systems, 1995, IEEE Trans. Neural Networks.

[3] Yuichi Nakamura, et al. Approximation of dynamical systems by continuous time recurrent neural networks, 1993, Neural Networks.

[4] Pierre Baldi, et al. Gradient descent learning algorithm overview: a general dynamical systems perspective, 1995, IEEE Trans. Neural Networks.

[5] Kwang Y. Lee, et al. Diagonal recurrent neural networks for dynamic systems control, 1995, IEEE Trans. Neural Networks.

[6] Visakan Kadirkamanathan, et al. Dynamic structure neural networks for stable adaptive control of nonlinear systems, 1996, IEEE Trans. Neural Networks.

[7] Amir F. Atiya, et al. Application of the recurrent multilayer perceptron in modeling complex process dynamics, 1994, IEEE Trans. Neural Networks.

[8] Barak A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks, 1995.

[9] Miroslaw Galicki, et al. The Planning of Robotic Optimal Motions in the Presence of Obstacles, 1998, Int. J. Robotics Res.

[10] Herbert Witte, et al. Learning continuous trajectories in recurrent neural networks with time-dependent weights, 1999, IEEE Trans. Neural Networks.

[11] Herbert Witte, et al. Training Continuous Trajectories by Means of Dynamic Neural Networks with Time Dependent Weights, 1998, NC.

[12] Barak A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks: a survey, 1995, IEEE Trans. Neural Networks.

[13] Emanuel Marom, et al. Efficient Training of Recurrent Neural Network with Time Delays, 1997, Neural Networks.