A Hybrid Method for Nonlinear Programming

The recently developed quasi-Newton method for nonlinear programming has local, superlinear convergence, but global convergence cannot be ensured. The quasi-Newton version of the multiplier method, though less efficient, shares certain global convergence properties with the classical penalty methods. We observe that the difference between these two methods lies merely in their stepsize strategies. Based on this observation, we present a new method in which the stepsizes for both the primal variables and the Lagrange multipliers are chosen so that the method behaves like a quasi-Newton version of the multiplier method when the current estimates are poor, yet is locally as efficient as a quasi-Newton method. With these features, the method converges both globally and superlinearly.
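The flavor of such a hybrid stepsize strategy can be illustrated with a toy sketch: take a Newton step on the KKT system, attempt the full step first (fast local behavior), and backtrack on a penalty-type merit function when the estimates are poor (global behavior). This is only a minimal illustration under assumed choices (a quadratic test problem, an l1 penalty merit function, an exact Hessian in place of a quasi-Newton update), not the paper's algorithm.

```python
# Illustrative sketch, NOT the paper's method: hybrid stepsize strategy
# on the toy problem  min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0.
import numpy as np

def f(x):               # objective
    return x[0]**2 + x[1]**2

def c(x):               # equality constraint residual
    return x[0] + x[1] - 1.0

def merit(x, rho=10.0):
    # l1 penalty merit function: measures global progress,
    # in the spirit of classical penalty / multiplier methods
    return f(x) + rho * abs(c(x))

def kkt_step(x, lam):
    # Newton step on the KKT system; a quasi-Newton method would
    # replace the exact Hessian block by an approximation
    K = np.array([[2.0, 0.0, 1.0],
                  [0.0, 2.0, 1.0],
                  [1.0, 1.0, 0.0]])
    rhs = -np.array([2*x[0] + lam, 2*x[1] + lam, c(x)])
    d = np.linalg.solve(K, rhs)
    return d[:2], d[2]

def hybrid_solve(x, lam, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        dx, dlam = kkt_step(x, lam)
        if np.linalg.norm(dx) + abs(dlam) < tol:
            break
        # Stepsize strategy: try the full step first (quasi-Newton
        # local behavior); halve it until the merit function decreases
        # sufficiently (penalty-like global behavior).
        alpha = 1.0
        while alpha > 1e-8 and \
                merit(x + alpha*dx) > merit(x) - 1e-4*alpha*np.dot(dx, dx):
            alpha *= 0.5
        x = x + alpha*dx
        lam = lam + alpha*dlam
    return x, lam

x, lam = hybrid_solve(np.array([3.0, -2.0]), 0.0)
print(x)   # converges to (0.5, 0.5)
```

On this quadratic problem the full step is always accepted, so the iteration reduces to Newton's method; on a poorly scaled nonlinear problem the backtracking loop would cut the step, mimicking the slower but globally convergent multiplier method.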