A Mathematical Approach Towards Quantization of Floating Point Weights in Low Power Neural Networks

Neural networks are both compute and memory intensive and consume significant power during inference. Reducing the bit width of weights is one of the key techniques for making them power and area efficient without degrading performance. In this paper, we show that inference accuracy changes insignificantly even when floating-point weights are represented with 10 bits (fewer for certain networks) instead of 32 bits. We evaluate a set of 8 neural networks. Further, we propose a mathematical formula for finding the optimum number of bits required to represent the exponent of floating-point weights, below which accuracy drops drastically. We also show that the required mantissa width depends strongly on the number of layers in a neural network and give a mathematical proof of this dependence. Our simulation results show that bit reduction yields better throughput, power efficiency, and area efficiency than models with full-precision weights.
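To make the weight-quantization idea concrete, the sketch below simulates a reduced-precision floating-point format in NumPy by rounding each weight's mantissa to a fixed number of bits and clamping its exponent range. The bit widths shown (5-bit exponent, 4-bit mantissa), the function name `quantize_float`, and the simplified handling of exponent bias and subnormals are illustrative assumptions for this sketch, not the formula or optimum derived in the paper.

```python
import numpy as np

def quantize_float(weights, exp_bits=5, man_bits=4):
    """Simulate a reduced-precision float format (1 sign bit, exp_bits, man_bits)
    by rounding the mantissa and clamping the exponent of each weight.

    Simplified sketch: no subnormals, symmetric exponent range, round-to-nearest
    mantissa. Bit widths are illustrative, not the paper's derived optimum.
    """
    w = np.asarray(weights, dtype=np.float64)
    sign = np.sign(w)
    mag = np.abs(w)

    # Decompose |w| = m * 2**e with m in [0.5, 1); shift so the mantissa lies in [1, 2).
    m, e = np.frexp(mag)
    m, e = m * 2.0, e - 1

    # Round the mantissa to man_bits fractional bits.
    m = np.round(m * (1 << man_bits)) / (1 << man_bits)

    # Clamp the exponent to the range representable with exp_bits.
    e_max = 2 ** (exp_bits - 1) - 1
    e_min = -(2 ** (exp_bits - 1))
    e = np.clip(e, e_min, e_max)

    q = sign * np.ldexp(m, e)
    q[mag == 0] = 0.0  # preserve exact zeros
    return q.astype(np.float32)

# Example: quantize one layer's weights and measure the representation error.
w = (np.random.randn(256, 128) * 0.1).astype(np.float32)
wq = quantize_float(w, exp_bits=5, man_bits=4)
print("max abs error:", np.max(np.abs(w - wq)))
```

Sweeping `exp_bits` and `man_bits` over a trained model and re-measuring accuracy is one way to reproduce the kind of bit-width study described above.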
