Simple adaptive momentum: New algorithm for training multilayer perceptrons
The speed of convergence during training is an important consideration in the use of neural networks. The authors outline a new training algorithm that reduces both the number of iterations and the training time required for convergence of multilayer perceptrons, compared with standard back-propagation and conjugate gradient descent algorithms.
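The abstract does not give the authors' exact update rule, but the general idea of adaptive momentum in back-propagation can be sketched as follows. This is an illustrative assumption, not the paper's algorithm: the momentum coefficient is scaled by the cosine similarity between the current descent direction and the previous weight change, so momentum accelerates learning along consistent directions and is damped when the direction reverses. The network size, learning rate, and scaling rule below are all hypothetical choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic small benchmark for multilayer perceptrons.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A 2-4-1 MLP with sigmoid units (sizes chosen for illustration).
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def grads(W1, b1, W2, b2):
    """One back-propagation pass; returns gradients and the MSE loss.
    Constant loss-scaling factors are folded into the learning rate."""
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = (out - y) * out * (1 - out)          # output-layer delta
    gW2 = h.T @ err;  gb2 = err.sum(0)
    dh = err @ W2.T * h * (1 - h)              # hidden-layer delta
    gW1 = X.T @ dh;   gb1 = dh.sum(0)
    return [gW1, gb1, gW2, gb2], ((out - y) ** 2).mean()

params = [W1, b1, W2, b2]
prev = [np.zeros_like(p) for p in params]      # previous weight change
lr, mu_max = 0.5, 0.9                          # hypothetical hyperparameters

_, loss0 = grads(*params)
for _ in range(5000):
    g, _ = grads(*params)
    d = [-gi for gi in g]                      # current descent direction
    flat_d = np.concatenate([a.ravel() for a in d])
    flat_p = np.concatenate([a.ravel() for a in prev])
    denom = np.linalg.norm(flat_d) * np.linalg.norm(flat_p)
    cos = float(flat_d @ flat_p / denom) if denom > 0 else 0.0
    # Adaptive momentum (assumed rule): full momentum when the previous
    # step agrees with the current direction, none when they oppose.
    mu = mu_max * max(cos, 0.0)
    prev = [lr * di + mu * pi for di, pi in zip(d, prev)]
    params = [p + dp for p, dp in zip(params, prev)]

_, loss1 = grads(*params)
print(f"initial MSE {loss0:.3f} -> final MSE {loss1:.3f}")
```

Compared with a fixed momentum coefficient, this kind of scaling avoids the oscillation that a large constant momentum can cause near a minimum, which is consistent with the abstract's claim of faster convergence than standard back-propagation.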