A Fast Learning Algorithm Based on Layered Hessian Approximations and the Pseudoinverse

In this article, we present a simple, effective approach to learning in a multilayer perceptron (MLP) that is based on approximating the Hessian using only local information, namely the correlations of output activations from preceding layers of hidden neurons. Training the hidden-layer weights with this Hessian approximation, combined with training the final layer of output weights using the pseudoinverse [1], yields improved performance at a fraction of the computational and structural complexity of conventional learning algorithms.
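To make the two-stage scheme concrete, the following is a minimal NumPy sketch under stated assumptions: a single-hidden-layer tanh MLP trained on squared error, where the correlation matrix of a layer's input activations stands in for the local Hessian block (with a small ridge term for stability) and the output weights are solved in closed form via the pseudoinverse. The function name, hyperparameters, and exact update rule are illustrative assumptions, not the paper's precise algorithm.

```python
import numpy as np

def train_two_layer_mlp(X, T, n_hidden=32, epochs=100, lr=0.1, ridge=1e-3, seed=0):
    """Sketch: hidden layer updated with a Newton-like step preconditioned
    by a layer-local Hessian approximation (the correlation matrix of the
    layer's input activations); output layer solved via pseudoinverse."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    Xb = np.hstack([X, np.ones((N, 1))])          # append bias column
    W1 = rng.normal(scale=0.1, size=(d + 1, n_hidden))

    for _ in range(epochs):
        H = np.tanh(Xb @ W1)                      # hidden activations
        Hb = np.hstack([H, np.ones((N, 1))])
        W2 = np.linalg.pinv(Hb) @ T               # output weights: least squares

        # Backpropagated gradient for the hidden layer (squared-error loss).
        E = Hb @ W2 - T                           # output residuals
        G = (E @ W2[:-1].T) * (1.0 - H**2)        # error through tanh derivative
        grad = Xb.T @ G / N

        # Local Hessian approximation: correlation matrix of the layer's
        # input activations, plus a ridge term for numerical stability.
        A = Xb.T @ Xb / N + ridge * np.eye(d + 1)
        W1 -= lr * np.linalg.solve(A, grad)       # preconditioned update

    # Final closed-form solve for the output layer on the trained features.
    H = np.tanh(Xb @ W1)
    Hb = np.hstack([H, np.ones((N, 1))])
    W2 = np.linalg.pinv(Hb) @ T
    return W1, W2
```

Because the approximate Hessian here depends only on activations already available at the layer's input, each update costs one small linear solve per layer rather than a pass over a full network-wide curvature matrix, which is the source of the complexity savings claimed above.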