Accelerating the Solution of Linear Systems by Iterative Refinement in Three Precisions

We propose a general algorithm for solving an $n\times n$ nonsingular linear system $Ax = b$ based on iterative refinement with three precisions. The working precision is combined with possibly different precisions for solving for the correction term and for computing the residuals. Via rounding error analysis of the algorithm, we derive sufficient conditions for convergence and bounds for the attainable normwise forward error and normwise and componentwise backward errors. Our results generalize and unify many existing rounding error analyses for iterative refinement. With single precision as the working precision, we show that by using LU factorization in IEEE half precision as the solver and computing the residuals in double precision it is possible to solve $Ax = b$ to full single precision accuracy for condition numbers $\kappa_2(A) \le 10^4$, with the $O(n^3)$ part of the computations carried out entirely in half precision. We show further that by solving the correction equations by GMRES preconditioned by the LU factors, the restriction on the condition number can be weakened to $\kappa_2(A) \le 10^8$, although in general there is no guarantee that GMRES will converge quickly. Taking for comparison a standard $Ax = b$ solver that uses LU factorization in single precision, these results suggest that on architectures for which half precision is efficiently implemented it will be possible to solve certain linear systems $Ax = b$ up to twice as fast \emph{and} to greater accuracy. Analogous results are given with double precision as the working precision.

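The following minimal NumPy/SciPy sketch (not from the paper) illustrates the structure of three-precision iterative refinement described above: the factorization precision is simulated by rounding the matrix and its LU factors to IEEE half precision (LAPACK provides no half-precision LU), the working precision is single, and residuals are accumulated in double. The function name \texttt{ir3} and all parameters are illustrative assumptions, not the authors' implementation.

\begin{verbatim}
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def ir3(A, b, max_iter=20, tol=None):
    """Illustrative three-precision iterative refinement.

    Factorization precision: half   (simulated by rounding to np.float16)
    Working precision:       single (np.float32)
    Residual precision:      double (np.float64)
    """
    A32 = np.asarray(A, dtype=np.float32)
    b32 = np.asarray(b, dtype=np.float32)
    if tol is None:
        tol = np.finfo(np.float32).eps

    # Simulated half-precision LU: round the matrix, factorize in single,
    # then round the factors to half; a real solver would run LU in fp16.
    lu, piv = lu_factor(np.float32(np.float16(A32)))
    lu = np.float32(np.float16(lu))

    # Initial solve with the low-precision factors.
    x = lu_solve((lu, piv), b32)

    for _ in range(max_iter):
        # Residual computed in double precision, then rounded to single.
        r = np.float32(np.float64(b32) - np.float64(A32) @ np.float64(x))
        # Correction equation solved with the low-precision LU factors.
        d = lu_solve((lu, piv), r)
        x = np.float32(x + d)
        if np.linalg.norm(d) <= tol * np.linalg.norm(x):
            break
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 100))
    b = rng.standard_normal(100)
    x = ir3(A, b)
    print(np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x)))
\end{verbatim}

In the GMRES-based variant discussed in the abstract, the triangular solves for the correction term are replaced by GMRES applied to the correction equation preconditioned by the low-precision LU factors, which is what weakens the condition-number restriction from $\kappa_2(A) \le 10^4$ to $\kappa_2(A) \le 10^8$.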