HEAP: A Heterogeneous Approximate Floating-Point Multiplier for Error Tolerant Applications

Floating-point arithmetic units are among the most commonly used components in modern computing systems and serve a wide range of domains and applications. While floating-point operators offer high-precision computation, many applications, such as multimedia processing and machine learning, tolerate errors and computational imprecision. For power-constrained embedded systems, saving resources and energy at an acceptable precision loss is a challenging design task. Approximate computing is an emerging design paradigm that offers a promising balance between accuracy on the one hand and power consumption and resource utilization on the other. While state-of-the-art approximate techniques offer a wide design space at the operator level, few works explore combining different techniques into a comprehensive heterogeneous approximate design. In this paper, we propose HEAP, a heterogeneous approximate floating-point multiplier. Based on a design space exploration process, we present a transistor-level approximation that reduces energy consumption by up to 68%. An experimental study on a set of machine learning applications shows promising results, with accuracy comparable to systems based on exact multipliers.
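HEAP's transistor-level approximation is a circuit technique and cannot be reproduced directly in software. As a minimal sketch of the general accuracy/energy trade-off the abstract describes, the following Python model applies one common operator-level approximation: truncating low-order mantissa bits before the multiply, which stands in for a narrower (and thus cheaper) mantissa multiplier. The function name `approx_fp32_mul` and the truncation width are illustrative assumptions, not the paper's method.

```python
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret a Python float as its IEEE-754 single-precision bit pattern."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float(b: int) -> float:
    """Reinterpret a 32-bit pattern as an IEEE-754 single-precision float."""
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

def approx_fp32_mul(a: float, b: float, trunc_bits: int = 12) -> float:
    """Approximate FP32 multiply: zero the low `trunc_bits` mantissa bits
    (mantissa occupies bits 0-22) of each operand, then multiply exactly.
    This models a reduced-width mantissa multiplier; illustrative only,
    not the HEAP circuit."""
    mask = 0xFFFFFFFF ^ ((1 << trunc_bits) - 1)  # clears low mantissa bits
    a_t = bits_to_float(float_to_bits(a) & mask)
    b_t = bits_to_float(float_to_bits(b) & mask)
    return a_t * b_t

if __name__ == "__main__":
    x, y = 3.14159, 2.71828
    exact = x * y
    approx = approx_fp32_mul(x, y)
    print(f"exact={exact:.6f} approx={approx:.6f} "
          f"rel_err={abs(exact - approx) / abs(exact):.2e}")
```

Increasing `trunc_bits` widens the error but corresponds to a smaller multiplier array in hardware, which is the kind of knob a design space exploration over approximate operators would sweep.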
