Time-Scaling in Recurrent Neural Learning

Recurrent backpropagation schemes for fixed point learning in continuous-time dynamic neural networks can be formalized through a differential-algebraic model, which in turn leads to singularly perturbed training techniques. Such models clarify the relative time-scaling between the network evolution and the adaptation dynamics, and allow for rigorous local convergence proofs. The present contribution addresses related issues in a discrete-time context: fixed point problems can be analyzed in terms of iterations with different evolution rates, whereas periodic trajectory learning can be reduced to a multiple fixed point learning problem via Poincaré maps.
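To make the two-time-scale structure mentioned above concrete, the following is a minimal sketch of fixed point learning with iterations running at different rates, in the spirit of Pineda-style recurrent backpropagation: a fast inner loop relaxes the network (and its adjoint, error-propagating system) to a fixed point, and a slow outer loop adapts the weights. The network size, step sizes, learning rate, and toy target are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative two-time-scale recurrent backpropagation sketch (assumed setup).
rng = np.random.default_rng(0)
n = 5                                     # number of units (assumption)
W = 0.1 * rng.standard_normal((n, n))     # recurrent weights
I_ext = rng.standard_normal(n)            # constant external input
target = np.tanh(rng.standard_normal(n))  # desired fixed point (toy task)

f = np.tanh
df = lambda u: 1.0 - np.tanh(u) ** 2

eps = 0.5    # fast relaxation step (network evolution)
eta = 0.05   # slow adaptation rate (weight dynamics), much slower than relaxation

for epoch in range(500):
    # Fast time scale: relax the forward dynamics to the fixed point
    #   x* = f(W x* + I_ext).
    x = np.zeros(n)
    for _ in range(200):
        x = (1.0 - eps) * x + eps * f(W @ x + I_ext)

    # Fast time scale: relax the adjoint (error-propagation) dynamics,
    #   y* = W^T (f'(u*) * y*) + e,  with u* = W x* + I_ext, e = target - x*.
    u = W @ x + I_ext
    e = target - x
    y = np.zeros(n)
    for _ in range(200):
        y = (1.0 - eps) * y + eps * (W.T @ (df(u) * y) + e)

    # Slow time scale: one gradient step on the weights,
    #   delta w_rs = eta * f'(u_r*) y_r* x_s*.
    W += eta * np.outer(df(u) * y, x)

print("residual fixed-point error:", np.linalg.norm(target - x))
```

The separation between the inner relaxation and the outer weight update is what the singularly perturbed (differential-algebraic) viewpoint formalizes: in the limit of infinitely fast relaxation, the network state is an algebraic constraint seen by the slow adaptation dynamics.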
