Self-adaptive learning rates in the backpropagation algorithm improve its function approximation performance

The backpropagation algorithm enables a multilayer perceptron to learn a mapping from a set of inputs to a set of outputs, but its function approximation performance is often unimpressive. In this paper the authors demonstrate that self-adaptation of the learning rate of the backpropagation algorithm improves the approximation of a function. The modified backpropagation algorithm with self-adaptive learning rates is based on a combination of two update rules: one for updating the connection weights and the other for updating the learning rate. The learning-rate update implements the gradient descent principle on the error surface. Simulation results with astrophysical data are presented.
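The abstract does not give the authors' exact update equations, but the idea of performing gradient descent on the error surface with respect to the learning rate itself can be sketched as follows. This is a minimal illustration, not the paper's method: a one-hidden-layer perceptron is trained on a toy target (here `sin(x)`, an assumption), and the scalar learning rate `eta` is adjusted using the observation that if `w_t = w_{t-1} - eta * g_{t-1}`, then `dE/d(eta) = -g_t . g_{t-1}`, so descending the error with respect to `eta` means nudging it by the inner product of successive gradients. All names (`meta_rate`, the network size, the data) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative): approximate y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

# One-hidden-layer perceptron with tanh units.
H = 16
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
params = [W1, b1, W2, b2]

def forward(X):
    A = np.tanh(X @ W1 + b1)           # hidden activations
    return A, A @ W2 + b2              # network prediction

def grads():
    """Backpropagation for E = 0.5 * mean squared error."""
    A, P = forward(X)
    err = P - Y                        # dE/dP
    n = len(X)
    gW2 = A.T @ err / n; gb2 = err.mean(0)
    dA = (err @ W2.T) * (1 - A**2)     # backprop through tanh
    gW1 = X.T @ dA / n; gb1 = dA.mean(0)
    return [gW1, gb1, gW2, gb2]

eta, meta_rate = 0.05, 0.01            # initial rate and meta-rate (assumed values)
prev_g = None
losses = []
for step in range(500):
    g = grads()
    if prev_g is not None:
        # Gradient descent on E with respect to eta: grow eta while
        # successive gradients agree, shrink it when they oppose.
        eta += meta_rate * sum((gi * pi).sum() for gi, pi in zip(g, prev_g))
        eta = max(eta, 1e-4)           # keep the rate positive
    for p, gi in zip(params, g):
        p -= eta * gi                  # the usual weight-update rule
    prev_g = g
    _, P = forward(X)
    losses.append(0.5 * np.mean((P - Y) ** 2))

print(f"initial loss {losses[0]:.4f} -> final loss {losses[-1]:.4f}, eta {eta:.4f}")
```

The self-adaptation step costs only one extra inner product per iteration, since both gradients are already available from the weight update.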