Augmented Lagrangian method with nonmonotone penalty parameters for constrained optimization

At each outer iteration of standard Augmented Lagrangian methods, one attempts to solve a box-constrained optimization subproblem to a prescribed tolerance. In exact arithmetic this subproblem is always solvable, so the possibility of terminating the subproblem solution without satisfying the theoretical stopping conditions is not contemplated in the usual convergence theories. In practice, however, the subproblem may not be solvable to the required precision, for several reasons; one of them is that an excessively large penalty parameter can impair the performance of the box-constrained optimization solver. In this paper a practical strategy for decreasing the penalty parameter in such situations is proposed. More generally, the different decisions that can be taken when, in practice, the Augmented Lagrangian subproblem cannot be solved are discussed. The result is an improved Augmented Lagrangian method that handles these numerical difficulties in a satisfactory way while preserving a suitable convergence theory. Numerical experiments involving all the test problems of the CUTEr collection are reported.
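
To make the idea concrete, the following is a minimal sketch of an Augmented Lagrangian outer loop with a nonmonotone penalty update, assuming SciPy's L-BFGS-B as the box-constrained inner solver. The function name, the update factors gamma_up and gamma_down, and the stopping test are illustrative assumptions, not the implementation described in the paper: the only point being demonstrated is that the penalty parameter is decreased, rather than increased, when the inner solver reports that it could not reach the requested precision.

```python
import numpy as np
from scipy.optimize import minimize


def augmented_lagrangian(f, h, x0, bounds, rho0=10.0,
                         gamma_up=10.0, gamma_down=0.5,
                         tol=1e-6, max_outer=50):
    """Hypothetical sketch: minimize f(x) subject to h(x) = 0 and box
    constraints, decreasing the penalty parameter when the box-constrained
    subproblem solver fails to meet the requested tolerance."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(h(x)))   # Lagrange multiplier estimates
    rho = rho0                  # penalty parameter

    for _ in range(max_outer):
        # Augmented Lagrangian for the current multipliers and penalty.
        def L(x_):
            hx = h(x_)
            return f(x_) + lam @ hx + 0.5 * rho * (hx @ hx)

        # Box-constrained subproblem, solved approximately by L-BFGS-B.
        res = minimize(L, x, method="L-BFGS-B", bounds=bounds,
                       options={"gtol": tol, "maxiter": 1000})
        x = res.x
        hx = h(x)

        if np.linalg.norm(hx, np.inf) <= tol:
            break               # feasible to tolerance: stop the outer loop

        lam = lam + rho * hx    # first-order multiplier update

        if res.success:
            rho *= gamma_up     # classical update: tighten the penalty
        else:
            rho *= gamma_down   # inner solver struggled: relax the penalty

    return x, lam, rho


if __name__ == "__main__":
    # Tiny illustration: min x1^2 + x2^2  s.t.  x1 + x2 = 1,  0 <= x <= 2.
    f = lambda x: x[0] ** 2 + x[1] ** 2
    h = lambda x: np.array([x[0] + x[1] - 1.0])
    x, lam, rho = augmented_lagrangian(f, h, x0=[0.0, 0.0],
                                       bounds=[(0.0, 2.0), (0.0, 2.0)])
    print(x)  # expected to approach (0.5, 0.5)
```

The design choice illustrated here is the nonmonotone behavior of rho: instead of insisting on a monotonically increasing penalty, the loop backs off when the large penalty itself appears to be the cause of the inner solver's failure.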
