Inexact-Restoration Method with Lagrangian Tangent Decrease and New Merit Function for Nonlinear Programming

A new inexact-restoration method for nonlinear programming is introduced. Each iteration of the main algorithm has two phases. In Phase 1, feasibility is improved explicitly; in Phase 2, optimality is improved on a tangent approximation of the constraints. Trust regions are used to reduce the step when the trial point is not good enough. The trust region is not centered at the current point, as in many nonlinear programming algorithms, but at the intermediate, more feasible point. Therefore, in this semifeasible approach, the more feasible intermediate point is considered to be essentially better than the current point. This is the first method in which intermediate-point-centered trust regions are combined with the decrease of the Lagrangian on the tangent approximation to the constraints. The merit function used in this paper is also new: it is a convex combination of the Lagrangian and the nonsquared norm of the constraints. The Euclidean norm is used for simplicity, but other norms for measuring infeasibility are admissible. Global convergence theorems are proved, a theoretically justified algorithm for the first phase is introduced, and some numerical insight is given.
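To fix ideas, one possible form of such a merit function is sketched below; this is an illustration only, with notation assumed here rather than taken from the abstract: the problem is written as minimizing f(x) subject to C(x) = 0, the Lagrangian is L(x, λ) = f(x) + λᵀC(x), and θ is the convex-combination weight.

\[
\Psi(x,\lambda,\theta) \;=\; \theta\, L(x,\lambda) \;+\; (1-\theta)\,\|C(x)\|_2, \qquad \theta \in (0,1],
\]

so that θ balances the Lagrangian (optimality) term against the nonsquared Euclidean norm of the constraints (infeasibility) term, consistent with the convex combination described above.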
