Unum: Adaptive Floating-Point Arithmetic

Arithmetic units usually represent numeric data types using fixed-length representations. For instance, hardware representations of real numbers typically employ the fixed-length formats defined by IEEE Standard 754 (32-bit single-precision and 64-bit double-precision floating-point numbers). Fixed-length representations allow simpler and faster arithmetic units than variable-length representations. However, fixed-length representations cannot adapt their accuracy and dynamic range to the requirements of the application. Some variable-length representations expose this adaptivity, which hardware implementations can then exploit. Recently, the Unum (universal number) representation has been proposed as an extension of floating-point representations. Unum is a variable-length representation that adapts its bit size to the numbers actually being represented and, moreover, associates accuracy information with each value and propagates it through arithmetic operations. In this work we compare Unum with the floating-point representations defined by IEEE Standard 754. We show that Unum arithmetic improves on IEEE 754 arithmetic in two ways: a) results obtained with Unum arithmetic are more reliable than IEEE 754 results because Unum does not hide accuracy issues, and b) Unum arithmetic units can apply energy-efficient techniques because Unum dynamically adapts the bit size of the representation to the numbers being represented.
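To make the adaptivity concrete, the following is a minimal sketch, assuming the Type-1 unum field layout described by Gustafson (sign, exponent, fraction, ubit, exponent-size field, and fraction-size field), of how the total bit width of a unum varies within a {3,4} environment. The function and parameter names are illustrative and not taken from any reference implementation.

```python
# Minimal sketch (assumed Type-1 unum field layout:
# sign | exponent | fraction | ubit | exponent-size field | fraction-size field).
# Names are illustrative, not from a reference implementation.

def unum_bit_width(es_minus_1: int, fs_minus_1: int, ess: int = 3, fss: int = 4) -> int:
    """Total bits of one unum in a {ess, fss} environment."""
    es = es_minus_1 + 1   # exponent width, stored in the utag as es - 1
    fs = fs_minus_1 + 1   # fraction width, stored in the utag as fs - 1
    return 1 + es + fs + 1 + ess + fss   # sign + exponent + fraction + ubit + utag

# In the {3,4} environment the same format ranges from a very small footprint ...
print(unum_bit_width(0, 0))    # 11 bits for small exact values
# ... up to a width comparable to IEEE 754 double precision.
print(unum_bit_width(7, 15))   # 33 bits at maximum precision
```

An arithmetic unit can therefore store and move only as many bits as a value actually needs, which is the source of the energy savings mentioned above.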
