On error analysis in arithmetic with varying relative precision

Recently, Clenshaw/Olver and Iri/Matsui proposed new floating point arithmetics which seek to eliminate overflows and underflows from most computations. Their common approach is to redistribute the available numbers, spreading out the largest and smallest numbers much more thinly than in standard floating point and thus achieving a larger range at the cost of lower precision at the ends of that range. The goal of these arithmetics is to eliminate much of the effort needed to write code that is reliable despite over/underflow. In this paper we argue that for many codes this eliminated effort will reappear in the error analyses needed to ascertain or guarantee the accuracy of the computed solution. Thus reliability with respect to over/underflow has been traded for reliability with respect to roundoff. We also propose a hardware flag, analogous to the “sticky flags” of the IEEE binary floating point standard, to perform some of this extra error analysis automatically.
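
To make the redistribution concrete, the following rough C sketch (illustrative only, not from the paper) shows how relative precision deteriorates toward the top of the range in a level-index representation of the kind Clenshaw/Olver proposed. There a number x is stored as a level-index value l with x = psi(l), where psi is the generalized exponential: psi(l) = l for 0 <= l < 1 and psi(l) = exp(psi(l - 1)) for l >= 1. An absolute error eps in the stored l then produces a relative error in x of roughly eps * psi'(l)/psi(l) = eps * psi(l-1) * psi(l-2) * ..., a factor that grows with |x|; the function names below are this sketch's own.

    #include <math.h>
    #include <stdio.h>

    /* Generalized exponential of level-index arithmetic:
       psi(l) = l on [0,1), psi(l) = exp(psi(l-1)) for l >= 1. */
    static double psi(double l)
    {
        return (l < 1.0) ? l : exp(psi(l - 1.0));
    }

    /* Amplification factor psi'(l)/psi(l) = psi(l-1)*psi(l-2)*...,
       the product stopping once the argument drops below 1. */
    static double amplification(double l)
    {
        double f = 1.0;
        for (double m = l - 1.0; m >= 1.0; m -= 1.0)
            f *= psi(m);
        return f;
    }

    int main(void)
    {
        /* A stored-level error eps becomes a relative error of
           about amplification(l) * eps in x = psi(l). */
        for (double l = 1.5; l <= 4.0; l += 0.5)
            printf("x = %-12g  relative error ~ %g * eps\n",
                   psi(l), amplification(l));
        return 0;
    }

Running this shows the trade the abstract describes: for moderate x the amplification factor is near 1 (full working precision), but by x of a few million it has already grown to roughly 40, i.e., precision has been thinned out at the end of the range in exchange for immunity to overflow.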
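For readers unfamiliar with the sticky flags mentioned in the last sentence, here is a minimal C sketch of how the IEEE mechanism behaves: once an exceptional event occurs anywhere in a computation, the corresponding flag stays set until explicitly cleared, so an entire block of code can be checked with a single test at the end. Standard C exposes these flags through <fenv.h>; the precision-loss flag proposed in this paper would be queried analogously but is an assumption here, not part of any standard library, so the example uses only the standard overflow and underflow flags.

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void)
    {
        /* volatile keeps the compiler from folding these at compile
           time, which would bypass the floating point flags */
        volatile double big = 1e308, tiny = 1e-308;

        feclearexcept(FE_ALL_EXCEPT);       /* clear all sticky flags */

        volatile double a = big * 10.0;     /* overflows to +infinity */
        volatile double b = tiny / 1e10;    /* underflows gradually   */
        (void)a; (void)b;

        /* one test at the end covers the whole computation above */
        if (fetestexcept(FE_OVERFLOW))
            printf("overflow occurred somewhere above\n");
        if (fetestexcept(FE_UNDERFLOW))
            printf("underflow occurred somewhere above\n");
        return 0;
    }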