An integrated approach to improving back-propagation neural networks
Back-propagation is the most popular training method for multi-layer feed-forward neural networks. To date, most research on improving back-propagation has focused on only one or two aspects of the algorithm at a time, although a few researchers have tackled several aspects together. This paper explores various ways of improving back-propagation and integrates them into a new, improved back-propagation algorithm. The aspects investigated are: network pruning during training, adaptive learning rates for individual weights and biases, adaptive momentum, and an extended role for the neuron in learning.
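To make the listed ingredients concrete, the following is a minimal sketch, not the authors' algorithm: per-weight adaptive learning rates driven by a simple sign-based rule (in the spirit of delta-bar-delta), a momentum term, and crude magnitude-based pruning, applied to a tiny back-propagation network on XOR. All hyperparameter values, the pruning threshold, and the update rule itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer: 2 -> 4 -> 1
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

params = [W1, b1, W2, b2]
lrates = [np.full_like(p, 0.5) for p in params]   # per-weight learning rates
velocity = [np.zeros_like(p) for p in params]      # momentum terms
prev_grad = [np.zeros_like(p) for p in params]     # for gradient-sign comparison

MOMENTUM, UP, DOWN = 0.8, 1.05, 0.7                # assumed constants
PRUNE_EVERY, PRUNE_THRESH = 500, 0.05              # assumed pruning schedule

for step in range(1, 5001):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    grads = [X.T @ d_h, d_h.sum(0), h.T @ d_out, d_out.sum(0)]

    for i, (p, g) in enumerate(zip(params, grads)):
        # Per-weight rate: grow it while the gradient keeps its sign, shrink on a flip.
        same_sign = np.sign(g) == np.sign(prev_grad[i])
        lrates[i] *= np.where(same_sign, UP, DOWN)
        np.clip(lrates[i], 1e-4, 1.0, out=lrates[i])
        prev_grad[i] = g
        # Momentum-smoothed update with the individual rates.
        velocity[i] = MOMENTUM * velocity[i] - lrates[i] * g
        p += velocity[i]

    # Crude magnitude-based pruning of near-zero weights during training
    # (a stand-in for the paper's pruning scheme, which is not reproduced here).
    if step % PRUNE_EVERY == 0:
        W1[np.abs(W1) < PRUNE_THRESH] = 0.0
        W2[np.abs(W2) < PRUNE_THRESH] = 0.0

print("final outputs:", out.ravel().round(3))
```

The per-weight rates and the pruning schedule are independent mechanisms here; the paper's contribution is integrating such mechanisms into a single training procedure.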