On the Convergence Rate of Variants of the Conjugate Gradient Algorithm in Finite Precision Arithmetic

We consider three mathematically equivalent variants of the conjugate gradient (CG) algorithm and how they perform in finite precision arithmetic. It was shown in [{\em Behavior of slightly perturbed Lanczos and conjugate-gradient recurrences}, Lin.~Alg.~Appl., 113 (1989), pp.~7--63] that, under certain conditions that {\em may} be satisfied by a finite precision CG computation, the convergence of that computation is like that of exact CG applied to a matrix with many eigenvalues distributed throughout tiny intervals about the eigenvalues of the given matrix. Using a set of test problems, we determine to what extent each of these variants satisfies the desired conditions and show that there is a significant correlation between how well the conditions are satisfied and how well the finite precision computation converges before reaching its ultimately attainable accuracy. We show that for problems where the interval width makes a significant difference in the behavior of exact CG, the different CG variants behave differently in finite precision arithmetic, whereas for problems where the interval width makes little difference, or where the convergence of exact CG is essentially governed by the upper bound based on the square root of the condition number of the matrix, the different CG variants converge similarly in finite precision arithmetic until the ultimate level of accuracy is achieved.
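For reference, the exact-CG bound mentioned above is the standard estimate $\|x - x_k\|_A \le 2\,\big((\sqrt{\kappa}-1)/(\sqrt{\kappa}+1)\big)^k \,\|x - x_0\|_A$, where $\kappa$ is the condition number of the symmetric positive definite matrix $A$ and $\|\cdot\|_A$ is the $A$-norm. The Python sketch below is not the authors' code; it is a minimal illustration, under assumed parameter choices, of the kind of experiment described in the abstract: two mathematically equivalent CG variants (the standard Hestenes--Stiefel recurrence and the Chronopoulos--Gear variant used in pipelined methods) are run in IEEE double precision on a diagonal matrix with a prescribed eigenvalue distribution, and the $A$-norm of the error is recorded at each step. The function names, eigenvalue distribution, problem size, and iteration counts are illustrative assumptions.

import numpy as np

def strakos_eigs(n, lam1=0.1, lamn=100.0, rho=0.9):
    # Eigenvalue distribution of a classic CG test matrix:
    # lambda_i = lam1 + (i-1)/(n-1) * (lamn - lam1) * rho**(n-i), i = 1..n.
    i = np.arange(1, n + 1)
    return lam1 + (i - 1.0) / (n - 1.0) * (lamn - lam1) * rho ** (n - i)

def a_norm_errors(xs, xstar, A):
    # A-norm of the error ||x_k - x*||_A for each stored iterate.
    return [float(np.sqrt((x - xstar) @ (A @ (x - xstar)))) for x in xs]

def hs_cg(A, b, x0, maxit):
    # Standard Hestenes--Stiefel CG: two coupled two-term recurrences.
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    xs = [x.copy()]
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        xs.append(x.copy())
    return xs

def cg_cg(A, b, x0, maxit):
    # Chronopoulos--Gear variant: an extra recurrence for s = A p lets the
    # two inner products per step be computed together, but rounding errors
    # propagate differently than in the Hestenes--Stiefel recurrences.
    x = x0.copy()
    r = b - A @ x
    w = A @ r
    gamma = r @ r
    alpha = gamma / (r @ w)
    beta = 0.0
    p = np.zeros_like(b)
    s = np.zeros_like(b)
    xs = [x.copy()]
    for _ in range(maxit):
        p = r + beta * p
        s = w + beta * s
        x = x + alpha * p
        r = r - alpha * s
        w = A @ r
        gamma_new = r @ r
        delta = r @ w
        beta = gamma_new / gamma
        alpha = gamma_new / (delta - beta * gamma_new / alpha)
        gamma = gamma_new
        xs.append(x.copy())
    return xs

if __name__ == "__main__":
    n, maxit = 48, 80
    A = np.diag(strakos_eigs(n))          # diagonal SPD test matrix
    b = np.ones(n) / np.sqrt(n)
    x0 = np.zeros(n)
    xstar = np.linalg.solve(A, b)
    err_hs = a_norm_errors(hs_cg(A, b, x0, maxit), xstar, A)
    err_cg = a_norm_errors(cg_cg(A, b, x0, maxit), xstar, A)
    for k in range(0, maxit + 1, 10):
        print(f"k={k:3d}  HS-CG: {err_hs[k]:.3e}  CG-CG: {err_cg[k]:.3e}")

On problems of this kind the two error curves typically track each other for the first iterations and then separate as rounding errors accumulate; varying the parameter rho, which controls how tightly the eigenvalues cluster, changes how pronounced the difference between the variants is.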

[1]  Anne Greenbaum, et al., Predicting the Behavior of Finite Precision Lanczos and Conjugate Gradient Computations, SIAM J. Matrix Anal. Appl., 2015.

[2]  Gérard Meurant, Multitasking the conjugate gradient method on the CRAY X-MP/48, Parallel Comput., 1987.

[3]  Anthony T. Chronopoulos, et al., On the efficient implementation of preconditioned s-step conjugate gradient methods on multiprocessors with memory hierarchy, Parallel Comput., 1989.

[4]  Anthony T. Chronopoulos, et al., s-step iterative methods for symmetric linear systems, 1989.

[5]  M. Hestenes, et al., Methods of conjugate gradients for solving linear systems, 1952.

[6]  A. Greenbaum, Comparison of splittings used with the conjugate gradient algorithm, 1979.

[7]  Gérard Meurant, On prescribing the convergence behavior of the conjugate gradient algorithm, Numerical Algorithms, 2019.

[10]  B. Parlett, The Symmetric Eigenvalue Problem, 1981.

[11]  Miroslav Tuma, et al., The Numerical Stability Analysis of Pipelined Conjugate Gradient Methods: Historical Context and Methodology, SIAM J. Sci. Comput., 2018.

[12]  C. Paige, Accuracy and effectiveness of the Lanczos algorithm for the symmetric eigenproblem, 1980.

[13]  Z. Strakos, et al., On error estimation in the conjugate gradient method and why it works in finite precision computations, 2002.

[14]  Christopher C. Paige, An Augmented Stability Result for the Lanczos Hermitian Matrix Tridiagonalization Process, SIAM J. Matrix Anal. Appl., 2010.

[15]  Iain S. Duff, et al., Users' guide for the Harwell-Boeing sparse matrix collection (Release 1), 1992.

[16]  Anne Greenbaum, et al., Using Nonorthogonal Lanczos Vectors in the Computation of Matrix Functions, SIAM J. Sci. Comput., 1998.

[17]  Christopher C. Paige, The computation of eigenvalues and eigenvectors of very large sparse matrices, 1971.

[18]  Y. Saad, et al., Practical Use of Polynomial Preconditionings for the Conjugate Gradient Method, 1985.

[19]  Emmanuel Agullo, et al., Analyzing the Effect of Local Rounding Error Propagation on the Maximal Attainable Accuracy of the Pipelined Conjugate Gradient Method, SIAM J. Matrix Anal. Appl., 2016.

[20]  Y. Saad, et al., Krylov Subspace Methods on Supercomputers, 1989.

[21]  Wim Vanroose, et al., Numerically Stable Variants of the Communication-hiding Pipelined Conjugate Gradients Algorithm for the Parallel Solution of Large Scale Symmetric Linear Systems, ArXiv, 2017.

[22]  Jeffrey Cornelis, et al., Numerically Stable Recurrence Relations for the Communication Hiding Pipelined Conjugate Gradient Method, IEEE Transactions on Parallel and Distributed Systems, 2019.

[23]  A. Greenbaum, Estimating the Attainable Accuracy of Recursively Computed Residual Methods, SIAM J. Matrix Anal. Appl., 1997.

[24]  Christopher C. Paige, Accuracy of the Lanczos Process for the Eigenproblem and Solution of Equations, SIAM J. Matrix Anal. Appl., 2019.

[25]  Aaron Sidford, et al., Stability of the Lanczos Method for Matrix Function Approximation, SODA, 2017.

[26]  A. Greenbaum, Behavior of slightly perturbed Lanczos and conjugate-gradient recurrences, Linear Algebra Appl., 1989.

[28]  John Van Rosendale, Minimizing Inner Product Data Dependencies in Conjugate Gradient Iteration, ICPP, 1983.

[29]  Wim Vanroose, et al., The Impact of Global Communication Latency at Extreme Scales on Krylov Methods, ICA3PP, 2012.

[30]  Anne Greenbaum, et al., Iterative methods for solving linear systems, Frontiers in Applied Mathematics, 1997.

[31]  Wim Vanroose, et al., Hiding global synchronization latency in the preconditioned Conjugate Gradient algorithm, Parallel Comput., 2014.