Optimal Newton-type methods for nonconvex smooth optimization problems

We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton's method. For each method in this class, we exhibit a smooth, bounded-below objective function, whose gradient is globally Lipschitz continuous within an open convex set containing any iterates encountered, and whose Hessian is α-Hölder continuous (for given α ∈ [0, 1]) on the path of the iterates, for which the method in question takes at least ⌊ε^{-(2+α)/(1+α)}⌋ function evaluations to generate a first iterate whose gradient is smaller than ε in norm. This provides a lower bound on the evaluation complexity of second-order methods in our class when applied to smooth problems satisfying our assumptions. Furthermore, for α = 1, this lower bound is of the same order in ε as the upper bound on the evaluation complexity of cubic regularization, thus implying that cubic regularization has optimal worst-case evaluation complexity within our class of second-order methods.
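
To make the order comparison concrete, specializing the exponent at α = 1 (Lipschitz-continuous Hessians) gives a quick consistency check; this worked step is added here and is not part of the original abstract:

\[
\varepsilon^{-\frac{2+\alpha}{1+\alpha}}\Big|_{\alpha=1} \;=\; \varepsilon^{-3/2},
\]

which is the same order as the O(ε^{-3/2}) worst-case evaluation bound known for (adaptive) cubic regularization, so the lower and upper bounds meet in this case.

For readers unfamiliar with the method class, the sketch below shows one cubic-regularized Newton step, i.e. an (approximate) minimizer of the model m(s) = gᵀs + ½ sᵀHs + (σ/3)‖s‖³, computed via the secular equation (H + λI)s = −g with λ = σ‖s‖. This is a minimal illustrative sketch under our own assumptions, not the paper's implementation: the function name cubic_reg_step is ours, and only the generic "easy case" of the subproblem is handled.

```python
import numpy as np

def cubic_reg_step(g, H, sigma):
    """One cubic-regularization (Newton-type) step: approximately minimize
    m(s) = g's + 0.5 s'Hs + (sigma/3)||s||^3.
    The minimizer satisfies (H + lam*I)s = -g with lam = sigma*||s||,
    so we bisect on lam (easy case only; the hard case is not handled)."""
    eigvals, V = np.linalg.eigh(H)           # H = V diag(eigvals) V'
    gt = V.T @ g                             # gradient in the eigenbasis
    norm_s = lambda lam: np.linalg.norm(gt / (eigvals + lam))

    lo = max(0.0, -eigvals.min()) + 1e-12    # lam must exceed -lambda_min(H)
    hi = lo + 1.0
    while norm_s(hi) > hi / sigma:           # grow hi until ||s(hi)|| <= hi/sigma
        hi *= 2.0
    for _ in range(100):                     # bisection on the secular equation
        lam = 0.5 * (lo + hi)
        if norm_s(lam) > lam / sigma:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return V @ (-gt / (eigvals + lam))       # s = -(H + lam*I)^{-1} g

# Example: one step on f(x) = x1^4/4 + x2^2/2 from the point (1, 1)
g = np.array([1.0, 1.0])                     # gradient of f at (1, 1)
H = np.diag([3.0, 1.0])                      # Hessian of f at (1, 1)
print(cubic_reg_step(g, H, sigma=1.0))
```

In adaptive variants such as ARC, σ is updated from iteration to iteration according to how well the model predicts the objective; it is that adaptive scheme which attains the O(ε^{-3/2}) upper bound matched by the lower bound above.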
