Quasi-Newton Methods for Unconstrained Optimization

Many techniques for solving general nonlinear unconstrained optimization problems involve iteratively minimizing a model function that satisfies certain interpolation conditions. These conditions ensure that the model behaves like the objective function in a neighborhood of the current iterate. The model functions often involve second-order derivatives of the objective function, which can be expensive to compute. The fundamental idea behind quasi-Newton methods is to maintain an inexpensive approximation to the Hessian matrix, updated at each iteration using only first-order (gradient) information. The practical success of quasi-Newton methods has spurred a great deal of interest and research, resulting in a considerable number of variations on this idea. The analytical difficulties associated with characterizing the performance of these algorithms mean that there is a real need for practical testing to support theoretical claims. The goal of this project is to describe, implement, and test these methods in a uniform, systematic, and consistent way. In the first part of the paper, we derive several classical quasi-Newton methods, discuss their relative benefits, and show how to implement them. In the second part, we investigate more recent variations, explain their motivation and theory, and analyze their performance.
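
To make the idea concrete, the following minimal sketch (in Python, using NumPy) implements one classical instance of this scheme, the BFGS update of an inverse-Hessian approximation, paired with a simple Armijo backtracking line search. The Rosenbrock test function, the step-size and tolerance constants, and the function name bfgs_minimize are illustrative assumptions for this sketch, not the implementations tested later in the paper.

import numpy as np

def bfgs_minimize(f, grad, x0, max_iter=200, tol=1e-6):
    """Minimize f by maintaining an approximation H to the inverse
    Hessian, updated at each step by the BFGS formula."""
    n = x0.size
    H = np.eye(n)                      # initial inverse-Hessian approximation
    x = x0.astype(float)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # quasi-Newton search direction
        # Armijo backtracking: shrink the step until sufficient decrease.
        alpha, fx, slope = 1.0, f(x), g @ p
        while f(x + alpha * p) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
        x_new = x + alpha * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature condition keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(n)
            # BFGS update: H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T
            H = ((I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s))
                 + rho * np.outer(s, s))
        x, g = x_new, g_new
    return x

# Example: the Rosenbrock function, a standard unconstrained test problem.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([
    -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
    200 * (x[1] - x[0]**2),
])
print(bfgs_minimize(f, grad, np.array([-1.2, 1.0])))  # approaches [1, 1]

Note that the update uses only the differences s (of iterates) and y (of gradients), so no second derivatives are ever computed; this is the source of the methods' practical appeal.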