NACU: A Non-Linear Arithmetic Unit for Neural Networks

Reconfigurable architectures are an attractive option for neural-network acceleration. They allow multiple neural networks of different types to be hosted on the same hardware, in parallel or in sequence, and reconfigurability also grants the ability to morph into different micro-architectures to meet varying power-performance constraints. Despite this, reconfigurable non-linear computational units have received little research attention. In this work, we present a formal and comprehensive method for selecting the fixed-point representation that achieves the highest accuracy against a floating-point reference implementation. We also present a novel design of an optimised reconfigurable arithmetic unit for calculating non-linear functions. The unit can be dynamically configured to calculate the sigmoid, hyperbolic tangent, and exponential functions using the same underlying hardware. We compare our work with the state of the art and show that our unit can calculate all three functions without loss of accuracy.
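Sharing one datapath across the three functions is possible because of standard identities: sigmoid(x) = 1 / (1 + e^-x) and tanh(x) = 2*sigmoid(2x) - 1, so a single exponential core suffices. The Python sketch below only illustrates these identities; the names exp_core, sigmoid, and tanh are placeholders for illustration, not the unit's actual datapath.

    import math

    def exp_core(x: float) -> float:
        # Placeholder for the shared hardware exponential datapath.
        return math.exp(x)

    def sigmoid(x: float) -> float:
        # sigmoid(x) = 1 / (1 + e^-x): one pass through the exp core
        # plus a reciprocal.
        return 1.0 / (1.0 + exp_core(-x))

    def tanh(x: float) -> float:
        # tanh(x) = 2*sigmoid(2x) - 1: the same exp core, with the
        # input and output rescaled.
        return 2.0 * sigmoid(2.0 * x) - 1.0

    assert abs(tanh(0.5) - math.tanh(0.5)) < 1e-12  # identity check

In hardware terms, the rescalings in tanh reduce to cheap shifts and adds, which is why one exponential core can serve all three configurations.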

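As a rough illustration of the representation-selection problem (a minimal sketch, not the paper's formal method), the following Python code sweeps the number of fraction bits in a signed fixed-point format and keeps the one with the smallest worst-case sigmoid error against the floating-point reference. The 16-bit word width, the [-8, 8] input range, and the max-error metric are all illustrative assumptions.

    import math

    def quantize(x: float, frac_bits: int, total_bits: int = 16) -> float:
        # Round x to a signed fixed-point value with frac_bits fraction
        # bits, saturating at the representable range.
        scale = 1 << frac_bits
        lo = -(1 << (total_bits - 1))
        hi = (1 << (total_bits - 1)) - 1
        q = max(lo, min(hi, round(x * scale)))
        return q / scale

    def max_error(frac_bits: int, n: int = 1000) -> float:
        # Worst-case error of a quantized sigmoid versus the float
        # reference over the assumed input range [-8, 8].
        err = 0.0
        for i in range(n + 1):
            x = -8.0 + 16.0 * i / n
            xq = quantize(x, frac_bits)
            ref = 1.0 / (1.0 + math.exp(-x))
            out = quantize(1.0 / (1.0 + math.exp(-xq)), frac_bits)
            err = max(err, abs(out - ref))
        return err

    # Sweep the fraction width and keep the format with the smallest
    # worst-case error.
    best = min(range(4, 15), key=max_error)
    print(f"best Q{15 - best}.{best} format, max error {max_error(best):.6f}")

Widening the fraction field shrinks quantization error but narrows the integer range, so the sweep captures the accuracy trade-off that any formal format-selection method must resolve.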