Diagonal Approximation of the Hessian by Finite Differences for Unconstrained Optimization

A new quasi-Newton method with a diagonal updating matrix is suggested, where the diagonal elements are determined by forward or central finite differences. The search direction is a direction of sufficient descent. The algorithm is equipped with an acceleration scheme, and its convergence is linear. The preliminary computational experiments use a set of 75 unconstrained optimization test problems classified into five groups according to the structure of their Hessian: diagonal, block-diagonal, band (tri- or penta-diagonal), sparse, and dense. With respect to the CPU time metric, intensive numerical experiments show that, for problems whose Hessian has a diagonal, block-diagonal, or band structure, the algorithm with diagonal approximation of the Hessian by finite differences is the top performer versus the well-established steepest descent and Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithms. On the other hand, as a by-product of this numerical study, we show that the BFGS algorithm is the fastest for problems with a sparse Hessian, followed by problems with a dense Hessian.
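
The sketch below is a minimal illustration of the central idea, not the paper's exact algorithm: the diagonal of the Hessian is approximated coordinate-wise by forward or central second differences of function values, and the resulting diagonal matrix is safeguarded so that the quasi-Newton direction remains a descent direction. The names `diag_hessian_fd` and `descent_direction`, the step size `h`, and the safeguard `eps` are illustrative assumptions.

```python
import numpy as np

def diag_hessian_fd(f, x, h=1e-4, scheme="central"):
    """Approximate the diagonal of the Hessian of f at x by finite
    differences of function values along each coordinate axis.

    central:  B_ii ~ (f(x + h e_i) - 2 f(x) + f(x - h e_i)) / h^2
    forward:  B_ii ~ (f(x + 2h e_i) - 2 f(x + h e_i) + f(x)) / h^2
    """
    n = x.size
    fx = f(x)
    diag = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        if scheme == "central":
            diag[i] = (f(x + e) - 2.0 * fx + f(x - e)) / h**2
        else:  # forward
            diag[i] = (f(x + 2.0 * e) - 2.0 * f(x + e) + fx) / h**2
    return diag

def descent_direction(grad, diag, eps=1e-8):
    """Quasi-Newton direction d = -B^{-1} g for a diagonal B.
    Non-positive diagonal entries are replaced by eps so that
    g^T d < 0, i.e. d stays a descent direction.
    """
    safe = np.where(diag > eps, diag, eps)
    return -grad / safe

# Illustrative use on a quadratic f(x) = 0.5 * sum(i * x_i^2),
# whose exact Hessian is the diagonal matrix diag(1, ..., n).
f = lambda x: 0.5 * x @ (np.arange(1, x.size + 1) * x)
g = lambda x: np.arange(1, x.size + 1) * x
x = np.ones(5)
d = descent_direction(g(x), diag_hessian_fd(f, x))
```

For this quadratic the central difference recovers the diagonal entries (1, ..., 5) up to rounding, and the computed direction satisfies g(x)^T d < 0, as required of a descent direction.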
