Globally convergent Newton methods for constrained optimization using differentiable exact penalty functions

In this paper we consider Newton's method for solving the system of necessary optimality conditions of optimization problems with equality and inequality constraints. The principal drawbacks of the method are the need for a good starting point, the inability to distinguish between local maxima and local minima, and, when inequality constraints are present, the necessity of solving a quadratic programming problem at each iteration. We show that all these drawbacks can be overcome to a great extent, without sacrificing the superlinear convergence rate, by making use of the differentiable exact penalty functions introduced by Di Pillo and Grippo [1]. We also demonstrate a close relationship between the class of penalty functions of Di Pillo and Grippo and the class introduced by Fletcher [12].
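For concreteness, a representative member of the Di Pillo and Grippo class, written here for the purely equality-constrained problem of minimizing $f(x)$ subject to $h(x) = 0$ (the symbol $S_c$ is ours, and the weighting matrices appearing in [1] are omitted in this sketch), is

\[
S_c(x,\lambda) \;=\; L(x,\lambda) \;+\; \frac{c}{2}\,\|h(x)\|^{2} \;+\; \frac{1}{2}\,\|\nabla_x L(x,\lambda)\|^{2},
\qquad
L(x,\lambda) \;=\; f(x) + \lambda^{\top} h(x).
\]

For $c$ sufficiently large, unconstrained local minimizers of $S_c$ over the joint variables $(x,\lambda)$ correspond to Karush-Kuhn-Tucker pairs of the constrained problem, which suggests how such a function can serve as a merit function for globalizing Newton's method on the optimality conditions; replacing $\lambda$ with a multiplier estimate $\lambda(x)$ depending only on $x$ yields a penalty function of Fletcher's type [12].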