An improvement to the natural gradient learning algorithm for multilayer perceptrons

Natural gradient learning has been shown to avoid the singularities in the parameter space of multilayer perceptrons. However, it requires maintaining and inverting an estimate of the Fisher information matrix, which introduces a large number of additional parameters beyond those of ordinary backpropagation. This article describes a new approach to natural gradient learning that requires far fewer additional parameters than the standard natural gradient algorithm. The new method exploits the algebraic structure of the parameter space to reduce the space and time complexity of the algorithm and to improve its performance.
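To make the baseline concrete, the sketch below shows a plain natural-gradient step for a toy one-hidden-layer perceptron: the parameters are updated by the gradient preconditioned with the inverse of an empirical Fisher information matrix. This is a minimal illustration of the standard algorithm the abstract refers to, not the paper's reduced-parameter method; the toy model, damping term, and all names are assumptions introduced for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data for a 1-hidden-layer perceptron with scalar output.
X = rng.normal(size=(64, 3))
y = np.tanh(X @ rng.normal(size=3))  # synthetic targets

n_hidden = 4
n_params = 3 * n_hidden + n_hidden   # W (3x4) and output weights v (4,)

def unpack(theta):
    W = theta[:3 * n_hidden].reshape(3, n_hidden)
    v = theta[3 * n_hidden:]
    return W, v

def predict(theta, x):
    W, v = unpack(theta)
    return np.tanh(x @ W) @ v

def per_sample_grad(theta, x, t):
    # Gradient of the squared error 0.5*(f(x) - t)^2 w.r.t. theta,
    # written out by hand for this tiny network.
    W, v = unpack(theta)
    h = np.tanh(x @ W)
    err = h @ v - t
    dv = err * h
    dW = err * np.outer(x, v * (1.0 - h ** 2))
    return np.concatenate([dW.ravel(), dv])

theta = rng.normal(scale=0.1, size=n_params)
eta, damping = 0.05, 0.1  # damping keeps the Fisher estimate invertible

def mean_loss(theta):
    preds = np.array([predict(theta, x) for x in X])
    return 0.5 * np.mean((preds - y) ** 2)

init_loss = mean_loss(theta)

for _ in range(100):
    grads = np.array([per_sample_grad(theta, x, t) for x, t in zip(X, y)])
    g = grads.mean(axis=0)
    # Empirical Fisher: average outer product of per-sample gradients.
    # Note this is an n_params x n_params matrix -- the extra storage
    # the abstract says the standard algorithm requires.
    F = grads.T @ grads / len(X) + damping * np.eye(n_params)
    theta -= eta * np.linalg.solve(F, g)  # natural-gradient step

final_loss = mean_loss(theta)
```

The cost of forming and solving with `F` scales with the square and cube of the parameter count, which is the overhead the proposed method aims to reduce by exploiting the structure of the parameter space.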