On the Convergence of a New Conjugate Gradient Algorithm

This paper studies the convergence of a conjugate gradient algorithm proposed in a recent paper by Shanno. It is shown that, under loose step length criteria similar to but slightly different from those of Lenard, the method converges to the minimizer of a convex function with a strictly bounded Hessian. Further, it is shown that for general functions bounded from below, with bounded level sets and bounded second partial derivatives, false convergence is impossible, in the sense that the sequence of approximations to the minimum cannot converge to a point at which the gradient is bounded away from zero.
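
The abstract does not reproduce the step length criteria themselves. For orientation only, loose (inexact) line search criteria of this general kind are often stated as Wolfe-type conditions; the following is a sketch of that standard form, which may differ in constants and detail from the precise criteria analyzed in the paper:
\[
f(x_k + \alpha_k d_k) \le f(x_k) + \epsilon_1 \alpha_k g_k^T d_k,
\qquad
\lvert g(x_k + \alpha_k d_k)^T d_k \rvert \le -\epsilon_2\, g_k^T d_k,
\qquad
0 < \epsilon_1 < \epsilon_2 < 1,
\]
where \(g_k = \nabla f(x_k)\) is the gradient at the current iterate and \(d_k\) is the search direction (so \(g_k^T d_k < 0\) for a descent direction). The first inequality enforces sufficient decrease in \(f\); the second prevents the step \(\alpha_k\) from being too short while tolerating an inexact minimization along \(d_k\).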