A New Learning Algorithm for Recurrent Networks
Abstract The most popular methods for modifying the weights of feedforward and recurrent neural networks, namely the backpropagation methods, are in fact gradient methods. Because the gradient is a local measure, the step taken in the gradient's direction, whose size is given by the learning rate ρ, must be infinitesimal, which implies choosing a very small ρ. This, however, leads to very slow convergence, so in practice a larger ρ is chosen. On the other hand, a ρ that is too large leads to strong oscillations of the objective function. Moreover, suitable values of ρ depend on the problem to be solved.
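The trade-off described above can be seen on even the simplest objective. The following is a minimal sketch, not from the paper: plain gradient descent on the one-dimensional quadratic loss f(w) = w², whose gradient is 2w. The loss and the ρ values are illustrative assumptions chosen to exhibit the behavior the abstract describes.

```python
# Minimal sketch (illustrative, not the paper's algorithm): plain gradient
# descent on f(w) = w^2, whose gradient is 2w. The update is
#     w <- w - rho * 2w = (1 - 2*rho) * w,
# so the choice of rho directly controls convergence or oscillation.

def gradient_descent(rho, w0=1.0, steps=10):
    """Run `steps` gradient-descent updates and return the trajectory of w."""
    w = w0
    trajectory = [w]
    for _ in range(steps):
        w = w - rho * 2.0 * w  # gradient of w^2 is 2w
        trajectory.append(w)
    return trajectory

# Very small rho: stable but slow convergence toward the minimum at w = 0
# (w shrinks by only 2% per step).
print(gradient_descent(rho=0.01))

# Moderate rho: fast, monotone convergence.
print(gradient_descent(rho=0.4))

# Too-large rho: 1 - 2*rho < -1, so the iterates overshoot the minimum and
# oscillate with growing amplitude, i.e. the objective diverges.
print(gradient_descent(rho=1.1))
```

With ρ = 0.01 the contraction factor per step is 0.98, with ρ = 0.4 it is 0.2, and with ρ = 1.1 it is -1.2, which reproduces, in miniature, the slow-convergence and oscillation regimes the abstract contrasts.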