A perfect example for the BFGS method

Consider the BFGS quasi-Newton method applied to a general non-convex function that has continuous second derivatives. This paper constructs a four-dimensional example such that the BFGS method need not converge. The example is perfect in the following sense: (a) All the stepsizes are exactly equal to one; the unit stepsize is also accepted by various line searches, including the Wolfe line search and the Armijo line search; (b) The objective function is strongly convex along each search direction, although it is not itself convex, and the unit stepsize is the unique minimizer of each line search function; hence the example also applies to the global line search and to the line search that always picks the first local minimizer; (c) The objective function is polynomial and hence infinitely continuously differentiable. If the convexity requirement on the line search function, namely (b), is relaxed, a relatively simple polynomial example can be constructed.
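To make the algorithmic setting concrete, the following is a minimal sketch of a generic BFGS iteration in which the unit stepsize is tried first and checked against an Armijo-type condition. It is not the paper's four-dimensional counterexample; the objective, the Armijo parameter 1e-4, and the fallback step are illustrative assumptions only.

```python
import numpy as np

def bfgs(f, grad, x0, max_iter=100, tol=1e-8):
    """Generic BFGS iteration with unit stepsize tried first (illustrative sketch)."""
    n = x0.size
    H = np.eye(n)              # inverse Hessian approximation
    x = x0.copy()
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g             # quasi-Newton search direction
        alpha = 1.0            # unit stepsize, as accepted throughout the paper's example
        # Armijo sufficient-decrease check (parameter 1e-4 is an assumption);
        # in the paper's construction the unit step always passes this test.
        if f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha = 0.5        # simple fallback step, for illustration only
        s = alpha * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:         # curvature condition; skip the update otherwise
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)   # standard BFGS inverse update
        x, g = x_new, g_new
    return x

# Usage on a simple convex quadratic (unrelated to the counterexample):
if __name__ == "__main__":
    A = np.diag([1.0, 4.0, 9.0, 16.0])
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    print(bfgs(f, grad, np.ones(4)))
```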
