An integrated approach to improving back-propagation neural networks

Back-propagation is the most popular training method for multi-layer feed-forward neural networks. To date, most research on improving back-propagation has focused on one or two aspects of the algorithm at a time, although some researchers have tackled several aspects simultaneously. This paper explores various ways of improving back-propagation and integrates them into a new, improved back-propagation algorithm. The aspects of back-propagation investigated are: network pruning during training, adaptive learning rates for individual weights and biases, adaptive momentum, and extending the role of the neuron in learning.
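Two of the aspects listed, per-weight adaptive learning rates and momentum, can be illustrated with a small sketch. The code below is not the paper's algorithm: it uses a generic sign-agreement rule in the spirit of delta-bar-delta (grow a weight's learning rate while its gradient keeps the same sign, shrink it on a sign flip), and all constants, the network size, and the XOR task are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (NOT the paper's method): back-propagation with one
# adaptive learning rate per weight/bias plus classical momentum,
# demonstrated on a tiny 2-2-1 sigmoid network learning XOR.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 2)); b1 = np.zeros((1, 2))
W2 = rng.normal(0.0, 1.0, (2, 1)); b2 = np.zeros((1, 1))
params = [W1, b1, W2, b2]

lrs  = [np.full_like(p, 0.2) for p in params]   # one rate per parameter
vel  = [np.zeros_like(p) for p in params]       # momentum buffers
prev = [np.zeros_like(p) for p in params]       # previous gradients


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def loss_and_grads():
    """Forward pass, mean-squared-error loss, and back-propagated gradients."""
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = out - y
    loss = float(np.mean(err ** 2))
    d_out = err * out * (1.0 - out)             # delta at output layer
    gW2 = h.T @ d_out
    gb2 = d_out.sum(0, keepdims=True)
    d_h = (d_out @ W2.T) * h * (1.0 - h)        # delta at hidden layer
    gW1 = X.T @ d_h
    gb1 = d_h.sum(0, keepdims=True)
    return loss, [gW1, gb1, gW2, gb2]


first_loss, _ = loss_and_grads()
for step in range(3000):
    _, grads = loss_and_grads()
    for p, g, lr, v, pg in zip(params, grads, lrs, vel, prev):
        # Grow each rate where the gradient sign is stable, shrink on flips.
        same = np.sign(g) == np.sign(pg)
        lr *= np.where(same, 1.05, 0.7)
        np.clip(lr, 1e-4, 1.0, out=lr)
        v *= 0.8                                 # momentum decay (assumed)
        v -= lr * g
        p += v                                   # in-place weight update
        pg[...] = g
final_loss, _ = loss_and_grads()
print(f"loss: {first_loss:.4f} -> {final_loss:.4f}")
```

Because every weight carries its own rate, parameters whose error surface is flat can take large steps while oscillating ones are damped individually, which is the motivation behind adapting rates per weight rather than globally.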