Enlarging the region of convergence of Newton's method for constrained optimization

In this paper, we consider Newton's method for solving the system of necessary optimality conditions of optimization problems with equality and inequality constraints. The principal drawbacks of the method are the need for a good starting point, the inability to distinguish between local maxima and local minima, and, when inequality constraints are present, the necessity to solve a quadratic programming problem at each iteration. We show that all these drawbacks can be overcome to a great extent without sacrificing the superlinear convergence rate by making use of exact differentiable penalty functions introduced by Di Pillo and Grippo (Ref. 1). We also show that there is a close relationship between the class of penalty functions of Di Pillo and Grippo and the class of Fletcher (Ref. 2), and that the region of convergence of a variation of Newton's method can be enlarged by making use of one of Fletcher's penalty functions.
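The core idea the abstract starts from — applying Newton's method to the system of necessary optimality conditions — can be illustrated with a minimal sketch. This is a hypothetical example, not the paper's algorithm: for an equality-constrained problem, the first-order conditions are ∇f(x) + λ∇h(x) = 0 together with h(x) = 0, and plain Newton's method iterates on that joint system in (x, λ). The toy problem, starting point, and function names below are all illustrative choices.

```python
import numpy as np

# Toy problem (illustrative):
#   minimize  f(x) = x1^2 + x2^2   subject to  h(x) = x1 + x2 - 1 = 0.
# First-order optimality system in z = (x1, x2, lam):
#   grad f(x) + lam * grad h(x) = 0,   h(x) = 0.

def kkt_residual(z):
    x, lam = z[:2], z[2]
    grad_f = 2.0 * x                        # gradient of f
    grad_h = np.array([1.0, 1.0])           # gradient of h
    return np.concatenate([grad_f + lam * grad_h, [x[0] + x[1] - 1.0]])

def kkt_jacobian(z):
    # Jacobian of the optimality system:
    # [ Hessian of Lagrangian   grad h ]
    # [ grad h^T                  0    ]
    return np.array([[2.0, 0.0, 1.0],
                     [0.0, 2.0, 1.0],
                     [1.0, 1.0, 0.0]])

z = np.array([3.0, -2.0, 0.0])              # starting point (x, lambda)
for _ in range(10):
    z = z - np.linalg.solve(kkt_jacobian(z), kkt_residual(z))

# z converges to x = (0.5, 0.5), lambda = -1; because f is quadratic and h
# is linear, the system is linear and a single Newton step already solves it.
```

The sketch also makes the abstract's first drawback concrete: the iteration solves for *stationary* points of the Lagrangian, so by itself it cannot tell a minimum from a maximum — which is part of what the exact penalty function machinery is brought in to remedy.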

[1] Boris Polyak, Iterative methods using Lagrange multipliers for solving extremal problems with constraints of the equation type, 1970.

[2] R. Fletcher, et al., A Class of Methods for Nonlinear Programming II: Computational Experience, 1970.

[3] James M. Ortega, et al., Iterative Solution of Nonlinear Equations in Several Variables, Computer Science and Applied Mathematics, 2014.

[4] J. J. Moré, et al., A Characterization of Superlinear Convergence and its Application to Quasi-Newton Methods, 1973.

[5] Stephen M. Robinson, et al., Perturbed Kuhn-Tucker points and rates of convergence for a class of nonlinear-programming algorithms, Math. Program., 1974.

[6] Elijah Polak, et al., A quadratically convergent primal-dual algorithm with global convergence properties for solving optimization problems with equality constraints, Math. Program., 1975.

[7] Shih-Ping Han, A globally convergent method for nonlinear programming, 1975.

[8] Olvi L. Mangasarian, et al., Superlinearly convergent quasi-Newton algorithms for nonlinearly constrained optimization problems, Math. Program., 1976.

[9] R. Tapia, Diagonalized multiplier methods and quasi-Newton methods for constrained optimization, 1977.

[10] N. Maratos, et al., Exact penalty function algorithms for finite dimensional and control optimization problems, 1978.

[11] B. N. Pshenichnyi, et al., Numerical Methods in Extremal Problems, 1978.

[12] M. J. D. Powell, et al., Algorithms for nonlinear constraints that use Lagrangian functions, Math. Program., 1978.

[13] M. J. D. Powell, et al., The convergence of variable metric methods for nonlinearly constrained optimization calculations, 1978.

[14] R. A. Tapia, et al., Quasi-Newton methods for equality constrained optimization: equivalence of existing methods and a new implementation, 1978.

[15] L. Grippo, et al., A New Class of Augmented Lagrangians in Nonlinear Programming, 1979.

[16] Torkel Glad, et al., A multiplier method with automatic limitation of penalty growth, Math. Program., 1979.

[17] S. Glad, Properties of updating methods for the multipliers in augmented Lagrangians, 1979.

[18] David Q. Mayne, et al., A first order, exact penalty function algorithm for equality constrained optimization problems, Math. Program., 1979.

[19] Luigi Grippo, et al., A method for solving equality constrained optimization problems by unconstrained minimization, 1980.

[20] C. Lemaréchal, et al., The watchdog technique for forcing convergence in algorithms for constrained optimization, 1982.

[21] Dimitri P. Bertsekas, et al., Constrained Optimization and Lagrange Multiplier Methods, 1982.

[22] M. J. D. Powell, et al., Variable Metric Methods for Constrained Optimization, ISMP, 1982.