Practical convergence conditions for unconstrained optimization

Convergence properties of descent methods are investigated for the case where the usual requirement that an exact line search be performed at each iteration is relaxed. The error at each iteration is measured by the relative decrease in the directional derivative along the search direction. The objective function is assumed to have continuous second derivatives, and the eigenvalues of its Hessian are assumed to be bounded above and below by positive constants. Sufficient conditions are given under which a method converges, or converges at a linear rate. These results are used to prove that the rate of convergence of a specific conjugate gradient method is linear, provided the error at each iteration is suitably restricted.
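The relaxed line-search criterion described above can be sketched numerically. The following is a minimal illustration, not the paper's method: it runs steepest descent on a strongly convex quadratic (Hessian eigenvalues bounded in [1, 5]) and accepts a step once the directional derivative along the search direction has shrunk, in magnitude, to a fraction `eps` of its initial value. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

def inexact_line_search(grad, x, d, eps, alpha_max=10.0):
    """Find a step alpha such that the directional derivative at the
    new point satisfies |grad(x + alpha d) . d| <= eps * |grad(x) . d|,
    i.e. the relative error in the directional derivative is at most
    eps. Bisection on phi'(alpha) = grad(x + alpha d) . d, which is
    negative at alpha = 0 for a descent direction d."""
    phi0 = grad(x) @ d  # phi'(0) < 0 for a descent direction
    lo, hi = 0.0, alpha_max
    # Expand hi until the slope is nonnegative; guaranteed to happen
    # for a strongly convex objective.
    while grad(x + hi * d) @ d < 0:
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        slope = grad(x + mid * d) @ d
        if abs(slope) <= eps * abs(phi0):
            return mid
        if slope < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def descent(grad, x0, eps=0.1, tol=1e-8, max_iter=500):
    """Steepest descent using the relaxed (inexact) line search."""
    x = x0.copy()
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            return x, k
        d = -g
        alpha = inexact_line_search(grad, x, d, eps)
        x = x + alpha * d
    return x, max_iter

# Strongly convex quadratic f(x) = 0.5 x^T A x - b^T x with Hessian
# eigenvalues bounded below by 1 and above by 5, as the abstract's
# assumptions require.
A = np.array([[5.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
grad_f = lambda x: A @ x - b

x_star, iters = descent(grad_f, np.zeros(2))
print(np.allclose(x_star, np.linalg.solve(A, b), atol=1e-6))
```

With `eps = 0` the criterion reduces to an exact line search; larger values of `eps` permit cheaper, less accurate searches while the eigenvalue bounds on the Hessian still guarantee convergence.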