Novel Training Algorithm Based on Quadratic Optimisation Using Neural Networks

In this paper we present a novel algorithm for training feedforward neural networks, based on the use of recurrent neural networks for bound-constrained quadratic optimisation. Instead of inverting the Hessian matrix or an approximation of it, as is done in other second-order algorithms, a recurrent equation that emulates a recurrent neural network determines the optimal weight update. The development of the algorithm is presented, along with its performance under ideal conditions and results from training multilayer perceptrons. The results show that the algorithm achieves lower errors than other methods on a variety of problems.
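To illustrate the core idea of computing a weight update by solving a bound-constrained quadratic programme with a recurrent dynamical system rather than by inverting the Hessian, the following is a minimal sketch. It uses a discrete-time projected-gradient recurrence (one common way to emulate a projection-type recurrent network); the function name `solve_box_qp`, the choice of step size, and the use of box bounds as a trust region on the update are illustrative assumptions, not the exact network proposed in the paper.

```python
import numpy as np

def solve_box_qp(Q, c, lower, upper, step=None, iters=500, tol=1e-10):
    """Approximately solve  min 0.5*x^T Q x + c^T x  s.t. lower <= x <= upper
    with a projected-gradient recurrence (a discrete-time projection network).
    Q is assumed symmetric positive semidefinite, e.g. a Gauss-Newton
    approximation of the Hessian."""
    n = len(c)
    if step is None:
        # conservative step size below 2 / lambda_max(Q) for convergence
        step = 1.0 / (np.linalg.norm(Q, 2) + 1e-12)
    x = np.clip(np.zeros(n), lower, upper)
    for _ in range(iters):
        # recurrent update: gradient step followed by projection onto the box
        x_new = np.clip(x - step * (Q @ x + c), lower, upper)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical usage: H approximates the Hessian of the training loss,
# g is the gradient, and the bounds constrain the size of the weight update.
H = np.array([[2.0, 0.5], [0.5, 1.0]])
g = np.array([1.0, -2.0])
delta = solve_box_qp(H, g, lower=-0.1 * np.ones(2), upper=0.1 * np.ones(2))
print("bounded weight update:", delta)  # weights would then be updated as w += delta
```

In this sketch the recurrence plays the role the abstract assigns to the recurrent network: it produces the constrained optimal update without forming or inverting the (approximate) Hessian.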