Algorithms for constrained optimization

Methods for solving a constrained optimization problem with n variables and m constraints can be divided roughly into four categories, depending on the dimension of the space in which the accompanying algorithm works. Primal methods work in n – m space, penalty methods work in n space, dual and cutting plane methods work in m space, and Lagrangian methods work in n + m space. Each of these approaches is founded on a different aspect of NLP theory. Nevertheless, there are strong interconnections between them, both in the final form of implementation and in performance. The rates of convergence of most practical algorithms are determined by the structure of the Hessian of the Lagrangian, much as the structure of the Hessian of the objective function determines the rates of convergence for most unconstrained methods.
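
To make the final claim concrete, the Lagrangian whose Hessian governs these convergence rates can be written down explicitly. The sketch below assumes the standard equality-constrained form, minimize f(x) subject to h(x) = 0 with h mapping R^n to R^m, which is not stated explicitly above.

```latex
% Assumed problem form: minimize f(x) subject to h(x) = 0, x in R^n, h : R^n -> R^m.
% The Lagrangian combines the objective and the m constraints via multipliers lambda in R^m:
\[
  L(x,\lambda) \;=\; f(x) + \lambda^{\mathsf T} h(x),
\]
% and its Hessian with respect to x, whose structure determines the convergence rates
% referred to above, is
\[
  \nabla_{xx}^{2} L(x,\lambda) \;=\; \nabla^{2} f(x) + \sum_{i=1}^{m} \lambda_i\, \nabla^{2} h_i(x).
\]
```

In this notation the dimension counts quoted above correspond to the variables each method manipulates: primal methods move within the (n – m)-dimensional feasible surface defined by h(x) = 0, penalty methods work with x alone, dual and cutting plane methods work with λ alone, and Lagrangian methods work with the pair (x, λ).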