Training algorithms for backpropagation neural networks with optimal descent factor
The poor convergence of existing training algorithms has limited the wide application of backpropagation neural networks. Several new training algorithms with very fast convergence are presented. All of them use derivative information to estimate the optimal descent factor efficiently, thus providing the fastest descent of the mean squared error along the descent direction that characterises each algorithm. Simulation results are illustrated.
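The idea of an optimal descent factor can be sketched with a minimal example. The code below is an illustrative assumption, not the paper's algorithm: it applies steepest descent to the quadratic mean-squared error of a linear least-squares problem, where the step size that exactly minimises the error along the search direction has a closed form. All variable names are illustrative.

```python
import numpy as np

# Illustrative sketch (not the paper's method): steepest descent on a
# linear least-squares MSE, choosing the "descent factor" eta that
# exactly minimises the error along the current search direction.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # synthetic inputs
true_w = np.array([1.0, -2.0, 0.5])   # synthetic target weights
y = X @ true_w

n = len(y)
w = np.zeros(3)
for _ in range(20):
    r = X @ w - y                 # residuals
    g = 2 * X.T @ r / n           # gradient of MSE(w) = ||Xw - y||^2 / n
    d = -g                        # steepest-descent direction
    Xd = X @ d
    # Optimal descent factor for this quadratic loss:
    # minimising MSE(w + eta*d) over eta gives
    # eta* = -(g . d) / ((2/n) * ||X d||^2)
    eta = -(g @ d) / (2 * (Xd @ Xd) / n)
    w = w + eta * d

mse = np.mean((X @ w - y) ** 2)
```

Because each step uses the exactly optimal factor rather than a fixed learning rate, the error drops sharply in a handful of iterations on this well-conditioned problem; for a nonquadratic network error surface, the optimal factor would instead be estimated from derivative information along the search direction.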