Linearly constrained minimax optimization

We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints that requires first-order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems are solved in the minimax sense subject to the linear constraints, which ensures a feasible-point algorithm. Further, we introduce local bounds on the solutions of the linear subproblems, the bounds being adjusted automatically depending on the quality of the linear approximations. It is proved that the algorithm always converges to the set of stationary points of the problem, a stationary point being defined in terms of the generalized gradients of the minimax objective function. It is further proved that, under mild regularity conditions, the algorithm is identical to a quadratically convergent Newton iteration in its final stages. We demonstrate the performance of the algorithm by solving a number of numerical examples with up to 50 variables, 163 functions, and 25 constraints. We have also implemented a version of the algorithm that is particularly suited to restricted approximation problems.
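
To make the scheme described above concrete, the following is a minimal sketch, not the authors' implementation: the functions are linearized at the current point, the linear minimax subproblem is solved as a linear program subject to the linear constraints and a local bound on the step, and the bound is then adjusted from how well the linear model predicted the actual decrease. The helper names, the use of `scipy.optimize.linprog`, the restriction to inequality constraints, and the 0.25/0.75 update thresholds are illustrative assumptions, not taken from the paper.

```python
# Sketch of a linearly constrained minimax iteration with an adaptive local bound.
# Assumptions: inequality constraints A x <= b only, a feasible starting point,
# and heuristic bound-update thresholds chosen for illustration.
import numpy as np
from scipy.optimize import linprog


def minimax_objective(f_vals):
    """F(x) = max_j f_j(x)."""
    return np.max(f_vals)


def solve_linear_subproblem(f_vals, grads, A_ub, resid_ub, lam):
    """Minimize max_j (f_j + g_j^T h) over h with A_ub h <= resid_ub and |h_i| <= lam.

    Standard LP reformulation in (h, t): minimize t subject to
    g_j^T h - t <= -f_j for all j, plus the linear constraints on h.
    """
    m, n = grads.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                   # minimize the bound t
    A1 = np.hstack([grads, -np.ones((m, 1))])     # f_j + g_j^T h <= t
    b1 = -f_vals
    if A_ub is not None:
        A2 = np.hstack([A_ub, np.zeros((A_ub.shape[0], 1))])
        A, b = np.vstack([A1, A2]), np.concatenate([b1, resid_ub])
    else:
        A, b = A1, b1
    bounds = [(-lam, lam)] * n + [(None, None)]   # local bound on h, t free
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    if not res.success:
        raise RuntimeError("linear minimax subproblem failed")
    return res.x[:n], res.x[-1]                   # step h, predicted model value


def constrained_minimax(x0, funcs, jac, A_ub=None, b_ub=None,
                        lam=1.0, tol=1e-8, max_iter=100):
    """Outer loop: linearize, solve the subproblem, adjust the local bound."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f_vals, grads = np.asarray(funcs(x)), np.asarray(jac(x))
        F = minimax_objective(f_vals)
        resid = b_ub - A_ub @ x if A_ub is not None else None
        h, pred = solve_linear_subproblem(f_vals, grads, A_ub, resid, lam)
        predicted = F - pred                      # decrease promised by the model
        if predicted <= tol:                      # model cannot improve: stop
            break
        actual = F - minimax_objective(np.asarray(funcs(x + h)))
        rho = actual / predicted                  # quality of the linear model
        if rho > 0.75:
            lam *= 2.0                            # good agreement: enlarge the bound
        elif rho < 0.25:
            lam *= 0.25                           # poor agreement: shrink the bound
        if actual > 0:                            # accept only feasible descent steps
            x = x + h
    return x
```

Because the step is constrained by the same linear inequalities as the original problem (expressed through the residual `b - A x`), every accepted iterate remains feasible, which is the feasible-point property mentioned in the abstract.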