Improved second-order evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

The unconstrained minimization of a sufficiently smooth objective function $f(x)$ is considered, for which derivatives up to order $p$, $p \geq 2$, are assumed to be available. An adaptive regularization algorithm is proposed that uses Taylor models of the objective of order $p$ and that is guaranteed to find a first- and second-order critical point in at most $O \left(\max\left( \epsilon_1^{-\frac{p+1}{p}}, \epsilon_2^{-\frac{p+1}{p-1}} \right) \right)$ evaluations of the objective function and its derivatives, where $\epsilon_1, \epsilon_2 > 0$ are prescribed first- and second-order optimality tolerances. Our approach extends the method of Birgin et al. (2016) to the computation of second-order critical points, and establishes a novel complexity bound for second-order criticality under the same problem assumptions used for first-order criticality, namely that the $p$-th derivative tensor is Lipschitz continuous and that $f(x)$ is bounded below. This evaluation-complexity bound for second-order criticality improves on all known existing results of this type.
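For concreteness, the following is a brief sketch of the regularized Taylor model underlying adaptive regularization methods of this type (in the spirit of Birgin et al. 2016); the notation $T_p$, $\sigma_k$ and $\rho_k$ is the commonly used one and is stated here as an assumption rather than quoted from the paper. At iteration $k$, a trial step $s_k$ is obtained by approximately minimizing the regularized $p$-th order Taylor model

$$m_k(s) = T_p(x_k,s) + \frac{\sigma_k}{p+1}\,\|s\|^{p+1},$$

where $T_p(x_k,s)$ is the $p$-th order Taylor expansion of $f$ about $x_k$ and $\sigma_k > 0$ is the adaptive regularization weight. The trial point $x_k + s_k$ is accepted, and $\sigma_k$ possibly decreased, when the achieved-versus-predicted decrease ratio

$$\rho_k = \frac{f(x_k) - f(x_k + s_k)}{T_p(x_k,0) - T_p(x_k,s_k)}$$

is sufficiently large; otherwise the step is rejected and $\sigma_k$ is increased. In this setting, approximate first- and second-order criticality is typically monitored through conditions of the form $\|\nabla f(x_k)\| \leq \epsilon_1$ and $\lambda_{\min}\!\left(\nabla^2 f(x_k)\right) \geq -\epsilon_2$.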

[1] Nicholas I. M. Gould, et al. Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results, 2011, Math. Program.

[2] José Mario Martínez, et al. Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization, 2017, J. Glob. Optim.

[3] Tengyu Ma, et al. Finding approximate local minima faster than gradient descent, 2016, STOC.

[4] José Mario Martínez, et al. The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization, 2017, SIAM J. Optim.

[5] José Mario Martínez, et al. Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models, 2017, Math. Program.

[6] Yair Carmon, et al. Gradient Descent Efficiently Finds the Cubic-Regularized Non-Convex Newton Step, 2016, arXiv.

[7] Nicholas I. M. Gould, et al. Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity, 2011, Math. Program.

[8] Jin Yun Yuan, et al. Nonlinear Stepsize Control Algorithms: Complexity Bounds for First- and Second-Order Optimality, 2016, J. Optim. Theory Appl.

[9] Nicholas I. M. Gould, et al. Complexity bounds for second-order optimality in unconstrained optimization, 2012, J. Complex.

[10] Yurii Nesterov, et al. Cubic regularization of Newton method and its global performance, 2006, Math. Program.

[11] D. Gleich. Trust Region Methods, 2017.

[12] Nicholas I. M. Gould, et al. How much patience do you have? A worst-case perspective on smooth nonconvex optimization, 2012.

[13] Philippe L. Toint, et al. Second-order optimality and beyond: characterization and evaluation complexity in nonconvex convexly-constrained optimization, 2016.

[14] Daniel P. Robinson, et al. A trust region algorithm with a worst-case iteration complexity of $O(\epsilon^{-3/2})$ for nonconvex optimization, 2016, Math. Program.