Learning in neural networks by using tangent planes to constraint surfaces

Abstract

The principal disadvantage of the back-propagation gradient descent learning algorithm for multilayer feedforward neural networks is its relatively slow rate of convergence. An alternative method, which adjusts the weights by moving to the tangent planes to the constraint surfaces, is shown to give significantly faster convergence whilst preserving the system of back-propagating errors through the network.
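To make the idea concrete, the following is a minimal sketch, not the paper's own implementation, of what a tangent-plane update can look like for a small one-hidden-layer network. It assumes sigmoid units, a single scalar output, and takes the constraint surface for pattern p to be the set of weight vectors satisfying y(w; x_p) = t_p. Linearizing this constraint about the current weights and taking the minimum-norm step onto the resulting tangent plane gives the update dw = -(y - t) * grad(y) / ||grad(y)||^2, where grad(y) is still computed by back-propagating through the network. All function and variable names below are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, w2):
    h = sigmoid(W1 @ x)   # hidden-layer activations
    y = sigmoid(w2 @ h)   # scalar network output
    return h, y

def tangent_plane_step(x, t, W1, w2):
    """One per-pattern update: move the weight vector onto the tangent
    plane of the constraint surface y(w; x) = t (an assumed reading of
    the method; biases omitted for brevity)."""
    h, y = forward(x, W1, w2)
    f = y - t                        # signed constraint violation

    # Gradient of y w.r.t. all weights, obtained by back-propagation.
    dy_dz2 = y * (1.0 - y)           # sigmoid derivative at the output
    g_w2 = dy_dz2 * h                # gradient w.r.t. output weights
    delta_h = dy_dz2 * w2 * h * (1.0 - h)
    g_W1 = np.outer(delta_h, x)      # gradient w.r.t. hidden weights

    norm_sq = g_w2 @ g_w2 + np.sum(g_W1 * g_W1)
    if norm_sq < 1e-12:              # guard against a vanishing gradient
        return W1, w2
    step = f / norm_sq               # minimum-norm move onto the plane
    return W1 - step * g_W1, w2 - step * g_w2

# Usage: per-pattern updates on XOR-style data. Targets are pulled in
# from 0/1 so the constraint surfaces are reachable by a sigmoid output.
rng = np.random.default_rng(0)
W1, w2 = rng.normal(size=(4, 2)), rng.normal(size=4)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.1, 0.9, 0.9, 0.1])
for _ in range(2000):
    for x, t in zip(X, T):
        W1, w2 = tangent_plane_step(x, t, W1, w2)
```

Note that the direction of the step is the ordinary back-propagated gradient, which is the sense in which the method preserves the system of back-propagating errors; what changes relative to gradient descent is the size of the step, which is scaled so that the linearized constraint is satisfied exactly after each update.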