The recently developed quasi-Newton methods for nonlinear programming have local and superlinear convergence properties, but their global convergence cannot be ensured. The quasi-Newton version of the multiplier method, though less efficient, shares with the classical penalty methods some global convergence properties. We observe that the difference between these two methods lies merely in their stepsize strategies. Based on this point of view, we present a new method in which the stepsizes for both the primal variables and the Lagrange multipliers are strategically determined, so that the method behaves like a quasi-Newton version of the multiplier method when the current estimates are poor, yet is as efficient as a quasi-Newton method locally. Because of these features, the method converges both globally and superlinearly.
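The interplay described above can be illustrated with a minimal sketch, not the paper's exact algorithm: a quasi-Newton (BFGS) step for an equality-constrained problem, globalized by a backtracking line search on an l1 exact penalty ("merit") function. Taking the full unit step recovers fast local quasi-Newton behavior, while the penalty-based stepsize rule safeguards progress from poor starting estimates. The test problem, tolerances, and penalty weight `mu` below are illustrative assumptions.

```python
import numpy as np

# Illustrative problem: min x0^2 + x1^2  subject to  x0 + x1 - 1 = 0.
# Solution: x* = (0.5, 0.5), multiplier lam* = -1 (convention L = f + lam*c).
def f(x):
    return x[0]**2 + x[1]**2

def grad_f(x):
    return 2.0 * x

def c(x):
    return np.array([x[0] + x[1] - 1.0])

def jac_c(x):
    return np.array([[1.0, 1.0]])

def merit(x, mu):
    # l1 exact penalty function used to judge stepsizes (globalization).
    return f(x) + mu * np.abs(c(x)).sum()

def solve(x, iters=50, mu=10.0):
    n = x.size
    B = np.eye(n)            # quasi-Newton approximation to the Lagrangian Hessian
    lam = np.zeros(1)
    for _ in range(iters):
        g, A, cv = grad_f(x), jac_c(x), c(x)
        if np.linalg.norm(g + A.T @ lam) < 1e-9 and np.abs(cv).max() < 1e-9:
            break                                   # KKT conditions satisfied
        # Quasi-Newton step: solve the KKT system of the local quadratic model
        #   [B  A^T] [d  ]   [-g]
        #   [A   0 ] [lam] = [-c]
        K = np.block([[B, A.T], [A, np.zeros((1, 1))]])
        sol = np.linalg.solve(K, np.concatenate([-g, -cv]))
        d, lam_new = sol[:n], sol[n:]
        # Stepsize strategy: backtrack until the merit function decreases,
        # so poor iterates fall back on penalty-method-like progress while
        # good iterates accept the full quasi-Newton step t = 1.
        t = 1.0
        while t > 1e-10 and merit(x + t * d, mu) >= merit(x, mu):
            t *= 0.5
        s = t * d
        x_new = x + s
        # BFGS update on the change in the Lagrangian gradient; skip the
        # update when the curvature condition s.y > 0 fails.
        y = (grad_f(x_new) + jac_c(x_new).T @ lam_new) - (g + A.T @ lam_new)
        if s @ y > 1e-12:
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)
        x, lam = x_new, lam_new
    return x, lam

x_star, lam_star = solve(np.array([3.0, -1.0]))
```

For this convex quadratic test problem the iteration reaches (0.5, 0.5) in a handful of steps; the penalty weight `mu` must dominate the multiplier magnitudes for the l1 penalty to be exact, which is why it is set well above |lam*| = 1 here.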