Globally Convergent Modification of the Quickprop Method

A mathematical framework for the convergence analysis of the well-known Quickprop method is described. Furthermore, we propose a modification of this method that exhibits improved convergence speed and stability and, at the same time, alleviates the need for heuristically tuned learning parameters. Simulations are conducted to compare the performance of the modified Quickprop algorithm with that of several popular training algorithms. The results indicate that the increased convergence rates achieved by the proposed algorithm in no way compromise its generalization capability or stability.
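For reference, the baseline being modified is Fahlman's Quickprop rule, which treats each weight independently and takes a secant step computed from the last two gradient evaluations of the error function. The NumPy sketch below illustrates only that standard rule under common assumptions (a per-weight secant step with the usual growth-factor safeguard, mu around 1.75); the function name quickprop_step, the epsilon guards, and the gradient-descent fallback are illustrative choices of ours, and the paper's globally convergent modification, which the abstract does not detail, is not reproduced here.

```python
import numpy as np

def quickprop_step(w, g, g_prev, dw_prev, lr=0.1, mu=1.75):
    """One Quickprop update per weight (standard secant rule).

    w, g, g_prev, dw_prev are arrays of the same shape: current
    weights, current and previous gradients, and the previous
    weight change. Returns (new_w, dw).
    """
    # Secant step: model each weight's error curve as a parabola
    # through the two most recent gradient measurements.
    denom = g_prev - g
    # Guard against division by zero where consecutive gradients
    # coincide (epsilon is an illustrative safeguard).
    safe = np.where(np.abs(denom) > 1e-12, denom, 1e-12)
    dw = (g / safe) * dw_prev
    # Clamp each step to at most mu times the previous step, the
    # usual Quickprop safeguard against exploding secant steps.
    limit = mu * np.abs(dw_prev)
    dw = np.clip(dw, -limit, limit)
    # Fall back to a plain gradient-descent step where there is no
    # usable previous step (e.g. on the first iteration).
    dw = np.where(np.abs(dw_prev) > 1e-12, dw, -lr * g)
    return w + dw, dw

# Example: one update on the toy quadratic E(w) = 0.5 * ||w||^2,
# whose gradient is simply w.
w = np.array([1.0, -2.0])
g_prev = np.zeros_like(w)
dw_prev = np.zeros_like(w)   # first call falls back to -lr * g
g = w.copy()
w, dw_prev = quickprop_step(w, g, g_prev, dw_prev)
```

Note that the heuristic parameters lr and mu in this sketch are exactly the kind of hand-tuned quantities the proposed modification aims to dispense with.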
