Adaptive Regularization Minimization Algorithms with Non-Smooth Norms and Euclidean Curvature

A regularization algorithm (AR1pGN) for unconstrained nonlinear minimization is considered, whose model consists of a Taylor expansion of arbitrary degree p and a regularization term involving a possibly non-smooth norm. It is shown that the non-smoothness of the norm does not affect the O(ε₁^{-(p+1)/p}) upper bound on the evaluation complexity of finding first-order ε₁-approximate minimizers using p derivatives, and that this result does not hinge on the equivalence of norms in ℝⁿ. It is also shown that, if p = 2, the bound of O(ε₂^{-3}) evaluations for finding second-order ε₂-approximate minimizers still holds for a variant of AR1pGN named AR2GN, despite the possibly non-smooth nature of the regularization term. Moreover, adapting the existing theory to handle the non-smoothness leads to an interesting modification of the subproblem termination rules and, in turn, to an even more compact complexity analysis. In particular, it is shown when the Newton step is acceptable for an adaptive regularization method. The approximate minimization of quadratic polynomials regularized with non-smooth norms is then discussed, and a new approximate second-order necessary optimality condition is derived for this case. A specialized algorithm is then proposed to enforce first- and second-order conditions that are strong enough to ensure the existence of a suitable step in AR1pGN (when p = 2) and in AR2GN, and its iteration complexity is analyzed.
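The kind of scheme the abstract describes for p = 2 can be sketched as follows. This is a minimal illustration only, not the paper's AR1pGN or AR2GN: the test function, constants, and the crude grid-search subproblem solver are all assumptions, chosen just to show an adaptive regularization loop whose cubic term uses the non-smooth ℓ∞ norm.

```python
# Illustrative sketch (assumed names and constants), NOT the paper's AR2GN:
# an AR2-style adaptive cubic regularization loop whose regularization term
# uses the non-smooth l_inf norm.

def f(x):  # a simple nonconvex two-variable test function (an assumption)
    return (x[0] ** 2 - 1.0) ** 2 + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[1]

def grad(x):
    return [4.0 * x[0] * (x[0] ** 2 - 1.0) + 0.1 * x[1],
            x[1] + 0.1 * x[0]]

def hess(x):
    return [[12.0 * x[0] ** 2 - 4.0, 0.1],
            [0.1, 1.0]]

def model(x, s, sigma):
    # second-order Taylor model plus (sigma/3) * ||s||_inf^3
    g, H = grad(x), hess(x)
    quad = sum(g[i] * s[i] for i in range(2)) + 0.5 * sum(
        s[i] * H[i][j] * s[j] for i in range(2) for j in range(2))
    return f(x) + quad + (sigma / 3.0) * max(abs(s[0]), abs(s[1])) ** 3

def argmin_model(x, sigma, r=1.0, n=20):
    # crude grid-search stand-in for the regularized subproblem solver
    best, best_s = model(x, [0.0, 0.0], sigma), [0.0, 0.0]
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            s = [r * i / n, r * j / n]
            v = model(x, s, sigma)
            if v < best:
                best, best_s = v, s
    return best_s

def ar2_sketch(x, sigma=1.0, eta=0.1, tol=1e-2, max_iter=50):
    for _ in range(max_iter):
        if max(abs(gi) for gi in grad(x)) <= tol:  # approx. first-order stop
            break
        s = argmin_model(x, sigma)
        pred = f(x) - model(x, s, sigma)             # predicted decrease
        ared = f(x) - f([x[0] + s[0], x[1] + s[1]])  # actual decrease
        if pred > 0.0 and ared / pred >= eta:        # successful step
            x = [x[0] + s[0], x[1] + s[1]]
            sigma = max(0.5 * sigma, 1e-8)           # relax regularization
        else:                                        # unsuccessful step
            sigma *= 2.0                             # inflate regularization
    return x

x_star = ar2_sketch([2.0, 1.5])
```

The acceptance test and the sigma updates are the standard adaptive regularization mechanism; the only deliberate twist is that the cubic term is built on a norm (ℓ∞) that is not differentiable everywhere, which is the situation the paper's complexity analysis covers.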
