A Projected Lagrangian Algorithm for Nonlinear Minimax Optimization

The minimax problem is an unconstrained optimization problem whose objective function is not differentiable everywhere, and hence it cannot be solved efficiently by standard techniques for unconstrained optimization. It is well known that the problem can be transformed into a nonlinearly constrained optimization problem with one extra variable, in which the objective and constraint functions are continuously differentiable. This equivalent problem has special properties that are ignored if it is solved by a general-purpose constrained optimization method. The algorithm we present exploits the special structure of the equivalent problem. A direction of search is obtained at each iteration by solving an equality-constrained quadratic programming problem, related to the one a projected Lagrangian method might use to solve the equivalent constrained problem. Special Lagrange multiplier estimates are used to form an approximation to the Hessian of the Lagrangian function, which appears in the quadratic program. Analytical Hessians, finite differencing, or quasi-Newton updating may be used to approximate this matrix. The resulting direction of search is guaranteed to be a descent direction for the minimax objective function. Under mild conditions, the algorithm is locally quadratically convergent when analytical Hessians are used.
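For concreteness, the sketch below (in Python with NumPy and SciPy, neither of which appears in the original paper) illustrates only the transformation described above: the nonsmooth problem of minimizing max_i f_i(x) is recast as the smooth problem of minimizing an extra variable theta subject to f_i(x) <= theta, and the result is handed to a general-purpose SQP solver rather than to the specialized projected Lagrangian algorithm of the paper. The three component functions form a small test problem chosen here purely for illustration.

```python
# Minimal sketch, assuming a three-function minimax test problem.
# It demonstrates the reformulation min_x max_i f_i(x)  ->
#   minimize theta  subject to  f_i(x) - theta <= 0,
# solved with a general-purpose SQP method (SLSQP), NOT the paper's
# specialized projected Lagrangian algorithm.
import numpy as np
from scipy.optimize import minimize

def f(x):
    """Component functions f_i(x); the minimax objective is max_i f_i(x)."""
    x1, x2 = x
    return np.array([
        x1**2 + x2**4,
        (2.0 - x1)**2 + (2.0 - x2)**2,
        2.0 * np.exp(x2 - x1),
    ])

def objective(z):
    # z = (x1, x2, theta); the smooth objective is just the extra variable.
    return z[-1]

def constraints(z):
    # SLSQP expects inequality constraints of the form g(z) >= 0,
    # so f_i(x) - theta <= 0 is written as theta - f_i(x) >= 0.
    x, theta = z[:-1], z[-1]
    return theta - f(x)

x0 = np.array([1.0, -0.1])
z0 = np.append(x0, f(x0).max())        # feasible start: theta = max_i f_i(x0)

res = minimize(objective, z0,
               constraints=[{"type": "ineq", "fun": constraints}],
               method="SLSQP")

x_star, theta_star = res.x[:-1], res.x[-1]
print("x* =", x_star, " minimax value =", f(x_star).max())
```

A specialized method of the kind described in the abstract would instead identify the (near-)active components f_i at each iterate and solve an equality-constrained quadratic program built from multiplier estimates and an approximation to the Hessian of the Lagrangian; the sketch above is meant only to make the smooth reformulation concrete.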
