The two well-known learning algorithms for recurrent neural networks are back-propagation (Rumelhart et al.; Werbos) and forward propagation (Williams and Zipser). The main drawback of back-propagation is its off-line backward pass in time for error accumulation, which violates the on-line requirement of many practical applications. Although the forward-propagation algorithm can be used on-line, its drawback is the heavy computational load required to update the high-dimensional sensitivity matrix (O(N^4) operations per time step). Developing a fast forward algorithm is therefore a challenging task. In this paper we propose a forward learning algorithm that is one order faster (only O(N^3) operations per time step) than the sensitivity-matrix algorithm. The basic idea is that, instead of integrating the high-dimensional sensitivity dynamic equation, we solve forward in time for its Green's function to avoid redundant computations, and then update the weights whenever the error is to be corrected.
A numerical example of classifying state trajectories with a recurrent network is presented; it substantiates that the proposed algorithm is faster than Williams and Zipser's algorithm.
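The complexity contrast between the two forward updates can be illustrated with a minimal NumPy sketch, assuming a simple discrete-time network y(t+1) = tanh(W y(t)); the network form, variable names, and nonlinearity are illustrative assumptions, not the paper's exact formulation. The first update propagates the full Williams-Zipser sensitivity tensor at O(N^4) per step, while the second propagates only an N x N transition matrix (a Green's-function-style quantity) at O(N^3) per step.

import numpy as np

N = 20                                  # number of recurrent units (assumed)
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((N, N))   # recurrent weight matrix
y = rng.standard_normal(N)              # current unit activations

def step(W, y):
    """One step of an assumed simple recurrent net: y(t+1) = tanh(W y(t))."""
    y_next = np.tanh(W @ y)
    return y_next, 1.0 - y_next ** 2    # activation and its derivative

# RTRL-style sensitivity update: p[k, i, j] = d y_k / d w_ij.
def rtrl_update(W, y, p):
    y_next, d = step(W, y)
    # Sum over l for every (k, i, j): N^3 entries times N terms -> O(N^4) work.
    recur = np.einsum('kl,lij->kij', W, p)
    recur[np.arange(N), np.arange(N), :] += y          # delta_{ki} * y_j source term
    return y_next, d[:, None, None] * recur

# Green's-function-style update: propagate only the N x N transition matrix Phi.
def greens_update(W, y, Phi):
    y_next, d = step(W, y)
    J = d[:, None] * W                  # Jacobian of the state map, N x N
    return y_next, J @ Phi              # N x N times N x N -> O(N^3) work

p = np.zeros((N, N, N))                 # full sensitivity tensor for RTRL
Phi = np.eye(N)                         # fundamental (transition) matrix
for _ in range(5):
    y_r, p = rtrl_update(W, y, p)       # O(N^4) per step
    y_g, Phi = greens_update(W, y, Phi) # O(N^3) per step
    y = y_r                             # both branches follow the same trajectory

In this sketch the weight gradient would be assembled from the propagated transition matrix only when an error correction is actually applied, in the spirit of the abstract's "update the weights whenever the error is to be corrected"; the exact assembly rule is specific to the paper and is not reproduced here.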
[1] Geoffrey E. Hinton, et al. Learning internal representations by error propagation, 1986.
[2] Pineda, et al. Generalization of back-propagation to recurrent neural networks, 1987, Physical Review Letters.
[3] Jacob Barhen, et al. Adjoint-Functions and Temporal Learning Algorithms in Neural Networks, 1990, NIPS.
[4] J. Barhen, et al. Application of adjoint operators to neural learning, 1990.
[5] Ronald J. Williams, et al. A Learning Algorithm for Continually Running Fully Recurrent Neural Networks, 1989, Neural Computation.
[6] Barak A. Pearlmutter. Learning State Space Trajectories in Recurrent Neural Networks, 1989, Neural Computation.