On the method of multipliers for mathematical programming problems

In this paper, the numerical solution of the basic problem of mathematical programming is considered. This is the problem of minimizing a function f(x) subject to a constraint ϕ(x)=0. Here, f is a scalar, x is an n-vector, and ϕ is a q-vector, with q&lt;n. Use is made of the augmented penalty function W(x, λ, k) = f(x) + λᵀϕ(x) + kϕᵀ(x)ϕ(x), where λ is a q-vector of multipliers and k&gt;0 is the penalty constant.

Previously, the augmented penalty function W(x, λ, k) was used by Hestenes in his method of multipliers. In Hestenes' version, the method of multipliers involves cycles, in each of which the multiplier and the penalty constant are held constant. After the minimum of the augmented penalty function is achieved in a given cycle, the multiplier λ is updated, while the penalty constant k is held unchanged.

In this paper, two modifications of the method of multipliers are presented in order to improve its convergence characteristics. The improved convergence is achieved by (i) increasing the updating frequency so that the number of iterations in a cycle is shortened to ΔN=1 for the ordinary-gradient algorithm and the modified-quasilinearization algorithm and to ΔN=n for the conjugate-gradient algorithm, (ii) imbedding Hestenes' updating rule for the multiplier λ into a one-parameter family and determining the scalar parameter β so that the error in the optimum condition is minimized, and (iii) updating the penalty constant k so as to produce a desirable effect in the ordinary-gradient algorithm, the conjugate-gradient algorithm, and the modified-quasilinearization algorithm. For the sake of identification, Hestenes' method of multipliers is called Method MM-1, the modification including (i) and (ii) is called Method MM-2, and the modification including (i), (ii), and (iii) is called Method MM-3.

Evaluation of the theory is accomplished with seven numerical examples. The first example pertains to a quadratic function subject to linear constraints; the remaining examples pertain to non-quadratic functions subject to nonlinear constraints. Each example is solved with the ordinary-gradient algorithm, the conjugate-gradient algorithm, and the modified-quasilinearization algorithm, employed in conjunction with Methods MM-1, MM-2, and MM-3.

The numerical results show that (a) for a given penalty constant k, Method MM-2 generally exhibits faster convergence than Method MM-1; (b) in both Methods MM-1 and MM-2, the number of iterations for convergence has a minimum with respect to k; and (c) the number of iterations for convergence of Method MM-3 is close to the minimum, with respect to k, of the number of iterations for convergence of Method MM-2. In this light, Method MM-3 has very desirable convergence characteristics.
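
To make the cycle structure concrete, the following is a minimal sketch of Method MM-1, assuming the standard multiplier update λ ← λ + 2kϕ(x) implied by this form of W (since ∂W/∂x = fₓ + ϕₓᵀ(λ + 2kϕ), the quantity λ + 2kϕ serves as the new multiplier estimate at the cycle minimum). The example problem f and ϕ, the BFGS inner solver, and all tolerances are illustrative assumptions, not taken from the paper; the paper's inner minimizations use the ordinary-gradient, conjugate-gradient, and modified-quasilinearization algorithms instead.

```python
# Minimal sketch of Hestenes' method of multipliers (Method MM-1):
# repeated unconstrained minimization of the augmented penalty function
#   W(x, lam, k) = f(x) + lam^T phi(x) + k * phi(x)^T phi(x),
# with lam updated between cycles and k held fixed.
# NOTE: f, phi, the BFGS inner solver, and the tolerances below are
# illustrative assumptions, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Illustrative objective (n = 2).
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

def phi(x):
    # Illustrative equality constraint phi(x) = 0 (q = 1).
    return np.array([x[0] + x[1] - 1.0])

def W(x, lam, k):
    # Augmented penalty function W(x, lam, k).
    p = phi(x)
    return f(x) + lam @ p + k * (p @ p)

def method_of_multipliers(x, lam, k=10.0, cycles=20, tol=1e-8):
    for _ in range(cycles):
        # One cycle: minimize W over x with lam and k held constant.
        x = minimize(W, x, args=(lam, k), method="BFGS").x
        # Hestenes' update of the multiplier; k is left unchanged.
        lam = lam + 2.0 * k * phi(x)
        if np.linalg.norm(phi(x)) < tol:
            break
    return x, lam

x_star, lam_star = method_of_multipliers(np.zeros(2), np.zeros(1))
print(x_star)  # approaches (0, 1), the constrained minimizer here
```

In this sketch, Method MM-2 would replace the update by λ ← λ + 2βkϕ(x), with the scalar β chosen at each update to minimize the error in the optimum condition, and would update after every iteration of the inner algorithm (ΔN=1, or ΔN=n for the conjugate-gradient algorithm) rather than after full convergence of a cycle; Method MM-3 would, in addition, adjust k between updates.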