Reducing the influence of tiny normwise relative errors on performance profiles

It is a widespread but little-noticed phenomenon that the normwise relative error ‖x - y‖/‖x‖ of vectors x and y of floating-point numbers of the same precision, where y is an approximation to x, can be many orders of magnitude smaller than the unit roundoff. We analyze this phenomenon and show that in the ∞-norm it happens precisely when x has components of widely varying magnitude and every component of x of largest magnitude agrees with the corresponding component of y. Performance profiles are a popular way to compare competing algorithms according to particular measures of performance. We show that performance profiles based on normwise relative errors can give a misleading impression due to the influence of zero or tiny normwise relative errors. We propose a transformation that reduces the influence of these extreme errors in a controlled manner, while preserving the monotonicity of the underlying data and leaving the performance profile unchanged at its left endpoint. Numerical examples with both artificial and genuine data illustrate the benefits of the transformation.
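The phenomenon described above is easy to reproduce. In the sketch below (a minimal illustration, not code from the paper), x has two components of widely differing magnitude; y agrees exactly with the dominant component and differs from the tiny component by one ulp. The resulting ∞-norm relative error is then far below the double-precision unit roundoff u ≈ 1.1 × 10⁻¹⁶:

```python
import numpy as np

# Unit roundoff for IEEE double precision: u = eps / 2 ~ 1.1e-16.
u = np.finfo(float).eps / 2

# x has components of widely varying magnitude.
x = np.array([1.0, 1e-12])

# y agrees exactly in the largest component; only the tiny
# component is perturbed, by a single ulp.
y = x.copy()
y[1] = np.nextafter(x[1], 2.0)

# Normwise relative error in the infinity norm.
err = np.linalg.norm(x - y, np.inf) / np.linalg.norm(x, np.inf)

print(err)  # nonzero, yet many orders of magnitude below u
```

Here the error is roughly 2 × 10⁻²⁸, about twelve orders of magnitude below the unit roundoff, exactly the situation characterized above: components of widely varying magnitude, with agreement in every component of largest magnitude.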
