Fixed-Posit: A Floating-Point Representation for Error-Resilient Applications

Today, almost all computer systems use the IEEE-754 floating-point format to represent real numbers. Recently, posit was proposed as an alternative to IEEE-754 floating point, as it offers better accuracy and a larger dynamic range. However, the configurable nature of posit, with varying numbers of regime and exponent bits, has acted as a deterrent to its adoption. To overcome this shortcoming, we propose the fixed-posit representation, in which the numbers of regime and exponent bits are fixed, and present the design of a fixed-posit multiplier. We evaluate the fixed-posit multiplier on error-resilient applications from the AxBench and OpenBLAS benchmark suites as well as on neural networks. The proposed fixed-posit multiplier achieves savings of 47%, 38.5%, and 22% in power, area, and delay, respectively, compared to posit multipliers, and up to 70%, 66%, and 26% savings in power, area, and delay, respectively, compared to a 32-bit IEEE-754 multiplier. These savings come with minimal loss of output quality (1.2% average relative error) across the OpenBLAS and AxBench workloads. Further, for neural networks such as ResNet-18 on ImageNet, we observe a negligible accuracy loss (0.12%) when using the fixed-posit multiplier.
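
To make the idea of fixing the regime and exponent fields concrete, the sketch below shows how a fixed-posit word might be decoded in Python. It assumes a layout of one sign bit, R regime bits stored as a two's-complement integer k, E exponent bits, and the remaining bits as fraction, with the usual posit value equation (-1)^s * 2^(k*2^E + e) * (1 + f). The field widths, the sign-magnitude handling of negatives, and the function name decode_fixed_posit are illustrative assumptions, not the exact encoding specified in the paper.

```python
def decode_fixed_posit(word: int, nbits: int = 32, R: int = 5, E: int = 2) -> float:
    """Decode an nbits-wide fixed-posit word under the assumed layout:
    1 sign bit | R regime bits (two's-complement k) | E exponent bits | fraction."""
    frac_bits = nbits - 1 - R - E
    sign = (word >> (nbits - 1)) & 0x1

    # Regime: a fixed-width field read as a signed (two's-complement) integer k,
    # instead of the run-length-encoded regime of a standard posit.
    regime = (word >> (E + frac_bits)) & ((1 << R) - 1)
    if regime >= (1 << (R - 1)):
        regime -= (1 << R)

    exponent = (word >> frac_bits) & ((1 << E) - 1)
    fraction = word & ((1 << frac_bits) - 1)

    # Combined scale k*2^E + e, applied to the normalized fraction 1.f
    scale = regime * (1 << E) + exponent
    value = (1.0 + fraction / (1 << frac_bits)) * (2.0 ** scale)
    return -value if sign else value
```

Because the regime width is fixed, the decoder (and hence a multiplier built on it) avoids the variable-length regime extraction of standard posit hardware, which is where the power, area, and delay savings reported above come from.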
