Error bounds from extra-precise iterative refinement

We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in 1948 and analyzed in the 1960s as a means to compute very accurate solutions to all but the most ill-conditioned linear systems. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) there was no standard way to access the higher-precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard has essentially removed the first obstacle. To overcome the second obstacle, we show how iterative refinement itself can be used to compute an error bound in any norm at small cost, and we use this to compute both an error bound in the usual infinity norm and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimensions 5, 10, 100, and 1000. As long as a normwise (componentwise) condition number computed by the algorithm is less than 1/(max{10, √n} · ϵw), the computed normwise (componentwise) error bound is at most 2 max{10, √n} · ϵw, and indeed bounds the true error. Here, n is the matrix dimension and ϵw = 2^(-24) is the working precision. Residuals were computed in double precision (53 bits of precision). In other words, the algorithm always computed a tiny error at negligible extra cost for most linear systems. For worse-conditioned problems (which we can detect using condition estimation), we obtained small correct error bounds in over 90% of cases.
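The core structural idea, factor once in working precision, then repeatedly compute the residual r = b - Ax in extra precision and solve for a correction with the existing factors, can be sketched as follows. This is a minimal illustration in Python with NumPy/SciPy, assuming IEEE single precision as the working precision and double precision for residuals, matching the test setup described above. The routine name refine, the iteration cap, and the stopping test are placeholders for illustration only; the paper's actual algorithm additionally tracks the convergence rate of the corrections to estimate condition numbers and to return normwise and componentwise error bounds.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def refine(A, b, max_iter=10):
        # Factor A once in working (single) precision: O(n^3), done only once.
        A32 = np.asarray(A, dtype=np.float32)
        lu_piv = lu_factor(A32)
        x = lu_solve(lu_piv, np.asarray(b, dtype=np.float32))  # initial solution

        A64 = np.asarray(A, dtype=np.float64)
        b64 = np.asarray(b, dtype=np.float64)
        eps_w = np.finfo(np.float32).eps / 2  # unit roundoff of working precision, 2^(-24)

        for _ in range(max_iter):
            # Residual computed in extra (double) precision.
            r = b64 - A64 @ x.astype(np.float64)
            # Correction reuses the single-precision factors: only O(n^2) per step.
            dx = lu_solve(lu_piv, r.astype(np.float32))
            x = x + dx
            # Placeholder stopping test: quit once the correction is negligible
            # relative to the solution in the infinity norm.
            if np.linalg.norm(dx, np.inf) <= eps_w * np.linalg.norm(x, np.inf):
                break
        return x

The structural point behind the "negligible extra cost" claim is visible here: the O(n^3) factorization is reused across all refinement steps, so each step adds only an extra-precise matrix-vector product and an O(n^2) triangular solve.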
