Automatic Adaptation of Learning Rate for Backpropagation Neural Networks

A method for improving the convergence rate of the backpropagation algorithm is proposed. This method adapts the learning rate using the Barzilai and Borwein [IMA J. Numer. Anal., 8, 141-148, 1988] steplength update for gradient descent methods. The learning rate is recomputed at each epoch and depends only on the weight and gradient values of the previous epoch. Experimental results show that the proposed method considerably improves the convergence rate of the backpropagation algorithm and, for the chosen test problems, outperforms other well-known training methods.
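For concreteness, here is a minimal sketch of a per-epoch Barzilai-Borwein learning-rate update applied to plain gradient descent on a toy quadratic error surface. The BB1 formula is the standard steplength from the cited paper; the fallback rate, the toy gradient, and all function names are illustrative assumptions rather than the paper's exact implementation.

    import numpy as np

    def bb_learning_rate(w, w_prev, g, g_prev, fallback=0.1):
        # Barzilai-Borwein (BB1) steplength: eta = (s.s) / (s.y), where
        # s = w - w_prev and y = g - g_prev come from the previous epoch.
        s = w - w_prev
        y = g - g_prev
        denom = s @ y
        if abs(denom) < 1e-12:          # safeguard: fall back to a fixed rate
            return fallback
        return (s @ s) / denom

    # Toy stand-in for the network error E(w) = w1**2 + 10*w2**2; in real
    # backpropagation, g would be the error gradient over all weights.
    def grad(w):
        return 2.0 * w * np.array([1.0, 10.0])

    w_prev = np.array([5.0, 5.0])
    g_prev = grad(w_prev)
    w = w_prev - 0.05 * g_prev          # first epoch uses a fixed rate
    for epoch in range(20):
        g = grad(w)
        eta = bb_learning_rate(w, w_prev, g, g_prev)
        w_prev, g_prev = w, g
        w = w - eta * g

    print(w)                            # approaches the minimum at the origin

Note that the update requires one preceding epoch's weights and gradient, so the very first epoch must use a conventional fixed learning rate, and a safeguard against a near-zero denominator is standard practice for Barzilai-Borwein methods.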