On error analysis in arithmetic with varying relative precision
Recently Clenshaw/Olver and Iri/Matsui proposed new floating-point arithmetics which seek to eliminate overflows and underflows from most computations. Their common approach is to redistribute the available numbers, spreading out the largest and smallest numbers much more thinly than in standard floating point, thus achieving a larger range at the cost of lower precision at the ends of the range. The goal of these arithmetics is to eliminate much of the effort needed to write code which is reliable despite over/underflow. In this paper we argue that for many codes this eliminated effort will reappear in the error analyses needed to ascertain or guarantee the accuracy of the computed solution. Thus reliability with respect to over/underflow has been traded for reliability with respect to roundoff. We also propose a hardware flag, analogous to the "sticky flags" of the IEEE binary floating-point standard, to do some of this extra error analysis automatically.
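The precision/range trade-off described above can be illustrated numerically. The sketch below (an illustration assuming a Clenshaw/Olver-style level-index representation, not code from the paper) models a number as an iterated exponential of a fractional index stored with fixed absolute precision, and measures how the relative spacing of representable values grows with magnitude:

```python
import math

def phi(level, index):
    """Generalized exponential of level-index arithmetic: apply exp()
    `level` times to the fractional index in [0, 1)."""
    x = index
    for _ in range(level):
        x = math.exp(x)
    return x

def relative_spacing(level, index, step=2.0**-23):
    """Relative gap between adjacent representable values when the
    index is stored with a fixed absolute step (here ~2^-23,
    a hypothetical 23-bit index fraction)."""
    x = phi(level, index)
    x_next = phi(level, index + step)
    return (x_next - x) / x

# At level 1 (moderate magnitudes) the relative spacing is close to
# the index step itself, comparable to single-precision floating point.
print(relative_spacing(1, 0.5))
# At level 4 (magnitudes around e^181, about 1e78) the relative spacing
# is orders of magnitude larger: the representable numbers are spread
# thin at the ends of the range, so relative precision is lost there.
print(relative_spacing(4, 0.5))
```

In standard binary floating point the relative spacing is essentially constant across the whole range, which is what makes the classical roundoff model (one uniform machine epsilon) work; the demonstration shows why that model, and hence routine error analyses, break down when precision varies with magnitude.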
[1] James Demmel. Effects of Underflow on Solving Linear Systems, 1983.
[2] Guido D. Salvucci, et al. IEEE Standard for Binary Floating-Point Arithmetic, 1985.
[3] Shouichi Matsui, et al. An Overflow/Underflow-Free Floating-Point Representation of Numbers, 1981.
[4] Xiaomei Yang. Rounding Errors in Algebraic Processes, 1964, Nature.
[5] James Demmel. Underflow and the Reliability of Numerical Software, 1984.