Training algorithms for backpropagation neural networks with optimal descent factor

The poor convergence of existing training algorithms hinders the wide application of backpropagation neural networks. Several new training algorithms with very fast convergence are presented. All of them use derivative information to efficiently estimate the optimal descent factor, thereby achieving the fastest decrease of the mean squared error along the descent directions that characterise each algorithm. Simulation results are presented.
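The abstract does not spell out the algorithms themselves, but the core idea it describes, choosing the descent factor (step size) from derivative information so that the mean squared error decreases as fast as possible along the current descent direction, can be sketched generically. The snippet below is a minimal illustration, not the paper's method: it trains a tiny network by steepest descent and, at each iteration, estimates the optimal descent factor with a one-dimensional Newton step on the error along the search direction. The network size, the finite-difference derivatives, and all identifiers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-2-1 network trained on XOR with mean squared error.
# (Hypothetical example problem; not from the paper.)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def unpack(w):
    """Split the flat parameter vector into layer weights and biases."""
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8].reshape(2, 1); b2 = w[8:9]
    return W1, b1, W2, b2

def mse(w):
    """Mean squared error of the network on the training set."""
    W1, b1, W2, b2 = unpack(w)
    H = np.tanh(X @ W1 + b1)
    Y = np.tanh(H @ W2 + b2)
    return 0.5 * np.mean((Y - T) ** 2)

def grad(w, eps=1e-6):
    # Central-difference gradient keeps the sketch short; a real
    # implementation would backpropagate instead.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (mse(w + e) - mse(w - e)) / (2 * eps)
    return g

w = rng.normal(scale=0.5, size=9)
for step in range(200):
    g = grad(w)
    d = -g  # steepest-descent direction
    # phi(eta) = E(w + eta * d). Estimate the optimal descent factor by
    # one Newton step on phi: eta* = -phi'(0) / phi''(0), where
    # phi'(0) = g . d and phi''(0) is approximated by finite differences.
    eps = 1e-4
    phi0 = mse(w)
    curv = (mse(w + eps * d) - 2 * phi0 + mse(w - eps * d)) / eps ** 2
    eta = -(g @ d) / curv if curv > 1e-12 else 0.01  # fixed-rate fallback
    w = w + eta * d

print("final MSE:", mse(w))
```

Along the steepest-descent direction the directional derivative phi'(0) equals -||g||^2, so whenever the estimated curvature is positive the computed factor is positive and gives the minimiser of the local quadratic model of the error; this is one plausible way to realise "fastest descent in the descent direction" from derivative information alone.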