A New Learning Algorithm for Recurrent Networks

Abstract The most popular methods for modifying the weights of feedforward and recurrent neural networks, namely the backpropagation methods, are in fact gradient methods. Because the gradient is a local measure, the step in the gradient's direction, whose size is given by the learning rate ρ, must be infinitesimal, which implies choosing a very small ρ. This, however, leads to very slow convergence, so in practice a larger ρ is chosen. On the other hand, too large a ρ causes strong oscillations of the objective function. Moreover, suitable values of ρ depend on the problem to be solved.
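
For reference, the gradient step discussed above is the standard gradient-descent weight update; the notation E for the error (objective) function and w for the weight vector is assumed here and is not taken from this abstract:

    \Delta w \;=\; -\,\rho \, \nabla_{w} E(w), \qquad w \leftarrow w + \Delta w

With this update, a very small ρ makes ‖Δw‖ negligible and convergence slow, while a too large ρ overshoots along the local descent direction, producing the oscillations of E mentioned above.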