Linear convergence of the conjugate gradient method
There are two procedures for applying the method of conjugate gradients to the problem of minimizing a convex nonlinear function: the "continued" method, and the "restarted" method, in which all data except the best previous point are discarded and the procedure is begun anew from that point. It is demonstrated by example that, in the absence of the standard initial condition on a quadratic function, the continued conjugate gradient method converges to the solution no better than linearly. Furthermore, it is shown that for a general nonlinear function, the nonrestarted conjugate gradient method converges no worse than linearly.
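The two procedures contrasted in the abstract can be sketched as one routine with an optional restart period: when the restart fires, all accumulated direction history is discarded and the search restarts from the current (best) point along the steepest descent direction. The sketch below uses the Fletcher–Reeves direction update and an Armijo backtracking line search; the test function, step-size rule, and tolerances are illustrative choices, not from the paper.

```python
def cg_minimize(f, grad, x0, restart=None, iters=100):
    """Nonlinear conjugate gradients (Fletcher-Reeves update).

    restart=None  -> the "continued" method (history never discarded)
    restart=k     -> the "restarted" method (reset to steepest descent
                     every k steps, keeping only the current point)
    """
    x = list(x0)
    g = grad(x)
    d = [-gi for gi in g]                       # initial steepest descent
    for k in range(iters):
        if sum(gi * gi for gi in g) < 1e-20:    # gradient ~ 0: done
            break
        # Armijo backtracking line search along d (illustrative rule)
        t, fx = 1.0, f(x)
        gTd = sum(gi * di for gi, di in zip(g, d))
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gTd:
            t *= 0.5
            if t < 1e-12:
                break
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        if restart is not None and (k + 1) % restart == 0:
            # restarted method: discard history, begin anew from this point
            d = [-gi for gi in g_new]
        else:
            # continued method: Fletcher-Reeves beta keeps the history
            beta = sum(gi * gi for gi in g_new) / sum(gi * gi for gi in g)
            d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Convex test function f(x) = x0^2 + 3*x1^2 (illustrative)
f = lambda x: x[0] ** 2 + 3 * x[1] ** 2
grad = lambda x: [2 * x[0], 6 * x[1]]
xmin = cg_minimize(f, grad, [2.0, 1.0], restart=2, iters=50)
```

On an n-dimensional quadratic, restarting every n steps mimics repeatedly applying exact CG from scratch; the abstract's point is that without such restarts the method can be no better than linearly convergent.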