Autonomous learning algorithm for fully connected recurrent networks

In this paper, fully connected recurrent neural networks trained by real-time recurrent learning (RTRL) are studied. An autonomous learning algorithm has been developed to learn the dynamical behaviour of continuous-time processes and to predict numerical time series. The originality of the method lies in the gradient-based adaptation of both the learning rate and the time parameter of the neurons, computed by a small-perturbation method. Starting from zero initial conditions (neural states, learning rate, time parameter, and weight matrix), the evolution is driven entirely by the dynamics of the training data. Stability issues are discussed, and several examples are investigated to compare the performance of the adaptive learning-rate and time-parameter algorithm with that of its constant-parameter counterpart.
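
As a concrete illustration, the sketch below implements the general scheme in Python: a small fully connected continuous-time network trained by RTRL, with the learning rate and time parameter adjusted downhill along finite (small) perturbations of a windowed error. The network size, driving signal, window length, and step sizes are illustrative assumptions, not values from the paper, and the time parameter starts at a small positive value (rather than zero) so that the discretized dynamics remain well defined.

```python
import numpy as np

n, dt, T = 3, 0.1, 500          # network size, Euler step, horizon (assumed)
W = np.zeros((n, n))            # weight matrix, zero initial condition
x = np.zeros(n)                 # neural states, zero initial condition
P = np.zeros((n, n, n))         # RTRL sensitivities P[k, i, j] = dx_k / dW_ij
eta, tau = 0.0, 1.0             # learning rate (zero start) and time parameter
delta, meta = 1e-4, 1e-3        # perturbation size and adaptation step (assumed)

ts = np.arange(T) * dt
targets = np.sin(ts)            # toy target for neuron 0 (assumed)
inputs = np.zeros((T, n))
inputs[:, 1] = np.sin(ts)       # driving signal into neuron 1 (assumed)

def step(x, W, tau, u):
    """One Euler step of tau * dx/dt = -x + tanh(W x) + u."""
    return x + (dt / tau) * (-x + np.tanh(W @ x) + u)

def window_error(W, tau, inputs, targets):
    """Cumulative squared error of neuron 0 over a short window."""
    xw, err = np.zeros(n), 0.0
    for u, d in zip(inputs, targets):
        xw = step(xw, W, tau, u)
        err += 0.5 * (d - xw[0]) ** 2
    return err

for k in range(T):
    u, d = inputs[k], targets[k]
    D = 1.0 - np.tanh(W @ x) ** 2            # derivative of tanh(W x)
    # RTRL sensitivity recursion (linearized network dynamics).
    for i in range(n):
        for j in range(n):
            dP = -P[:, i, j] + D * (W @ P[:, i, j])
            dP[i] += D[i] * x[j]
            P[:, i, j] += (dt / tau) * dP
    x = step(x, W, tau, u)
    e = d - x[0]
    W += eta * e * P[0]                      # gradient step on the weights
    # Small-perturbation estimates of dE/dtau and dE/deta over a
    # look-back window, followed by a gradient step on each parameter.
    if k >= 20 and k % 10 == 0:
        win = slice(k - 20, k)
        dE_dtau = (window_error(W, tau + delta, inputs[win], targets[win])
                   - window_error(W, tau - delta, inputs[win], targets[win])) / (2 * delta)
        tau = max(tau - meta * dE_dtau, 0.1) # floor is an assumed safeguard
        W_hi = W + delta * e * P[0]          # effect of a slightly larger eta
        W_lo = W - delta * e * P[0]          # effect of a slightly smaller eta
        dE_deta = (window_error(W_hi, tau, inputs[win], targets[win])
                   - window_error(W_lo, tau, inputs[win], targets[win])) / (2 * delta)
        eta = max(eta - meta * dE_deta, 0.0)

print(f"final eta={eta:.4f}, tau={tau:.4f}")
```

In the paper's formulation the adaptation laws for the learning rate and time parameter are derived from the gradient itself; the finite-difference estimates above merely stand in for those gradients in this self-contained sketch.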
