Methods of conjugate directions versus quasi-Newton methods

It is shown that algorithms for minimizing an unconstrained function F(x), x ∈ E^n, which are solely methods of conjugate directions can be expected to exhibit only an n-step or (n−1)-step superlinear rate of convergence to an isolated local minimizer. This is contrasted with quasi-Newton methods, which can be expected to exhibit superlinear convergence at every step. Similar statements about a quadratic rate of convergence hold when a Lipschitz condition is placed on the second derivatives of F(x).
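
For reference, the two rates being contrasted can be written in the standard way (this formulation is not quoted from the paper itself); with x* denoting the isolated local minimizer and {x_k} the iterates:

% every-step superlinear convergence (quasi-Newton methods):
\lim_{k \to \infty} \frac{\| x_{k+1} - x^* \|}{\| x_k - x^* \|} = 0
% n-step superlinear convergence (methods of conjugate directions),
% i.e. the error is guaranteed to contract superlinearly only over a full cycle of n (or n−1) steps:
\lim_{k \to \infty} \frac{\| x_{k+n} - x^* \|}{\| x_k - x^* \|} = 0

Under a Lipschitz condition on the second derivatives of F(x), the analogous quadratic-rate statements replace these limits by per-step (respectively per-cycle) bounds of the form \| x_{k+1} - x^* \| \le C \, \| x_k - x^* \|^2.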