Descentwise inexact proximal algorithms for smooth optimization

The proximal method is a standard regularization approach in optimization. Practical implementations of this algorithm require (i) an algorithm to compute the proximal point, (ii) a rule to stop this algorithm, and (iii) an update formula for the proximal parameter. In this work we focus on (ii) when smoothness is present, so that Newton-like methods can be used for (i): we aim at giving stopping rules that yield overall efficiency of the method. Roughly speaking, the usual rules stop the inner iterations when the current iterate is close to the proximal point. By contrast, we follow the standard paradigm of numerical optimization: the basis for our stopping test is a “sufficient” decrease of the objective function, namely a fraction of the ideal decrease. We establish convergence of the resulting algorithm and illustrate it on some ill-conditioned problems. The experiments show that combining the proposed inexact proximal scheme with a standard smooth optimization algorithm improves the numerical behaviour of the latter on these ill-conditioned problems.
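
To make the sufficient-decrease paradigm concrete, the following display sketches the setting; the notation (x_k, p_k, lambda_k, m) is introduced here for illustration and does not appear in the abstract, and the precise computable form of the test is specified in the paper, not here. At the current iterate x_k with proximal parameter \lambda_k > 0, the proximal point is

    p_k = \operatorname*{argmin}_{y} \; f(y) + \frac{1}{2\lambda_k}\,\|y - x_k\|^2 ,

and the ideal decrease is f(x_k) - f(p_k). A decrease-based stopping rule accepts the first inner iterate y_j (produced, e.g., by a Newton-like method applied to the regularized objective) such that

    f(x_k) - f(y_j) \;\ge\; m \,\bigl( f(x_k) - f(p_k) \bigr), \qquad 0 < m < 1 ,

where, since p_k is not available, the right-hand side must in practice be replaced by a computable estimate.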
