Deep Learning Training with Simulated Approximate Multipliers

This paper demonstrates, through simulation, how approximate multipliers can be used to improve the training performance of convolutional neural networks (CNNs). Approximate multipliers significantly outperform exact multipliers in terms of speed, power, and area, but they introduce an inaccuracy, commonly quantified by the mean relative error (MRE). To assess their applicability to CNN training, the paper presents a simulation of the impact of approximate-multiplier error on the training process. The results show that training CNNs with approximate multipliers can substantially improve speed, power, and area at the cost of a small loss in the achieved accuracy. To mitigate this loss, the paper proposes a hybrid training method: training starts with approximate multipliers and switches to exact multipliers for the last few epochs. In this way, the performance benefits of approximate multipliers in terms of speed, power, and area are obtained for most of the training stage, while the negative impact on accuracy is diminished by using exact multipliers for the final epochs.
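The error-injection idea behind the simulation, and the hybrid switch between approximate and exact multiplication, could be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names (`approx_mul`, `hybrid_mul`), the uniform error model, and the parameters (`mre`, `exact_tail`) are assumptions chosen so that the injected error's mean magnitude matches a target MRE.

```python
import numpy as np

def approx_mul(a, b, mre, rng):
    """Element-wise multiply with an injected random relative error
    whose mean magnitude equals `mre`, mimicking an approximate
    multiplier in simulation (assumed uniform error model)."""
    exact = np.asarray(a) * np.asarray(b)
    # Uniform error in [-2*mre, 2*mre] has mean absolute value `mre`.
    err = rng.uniform(-2.0 * mre, 2.0 * mre, size=exact.shape)
    return exact * (1.0 + err)

def hybrid_mul(a, b, mre, epoch, total_epochs, exact_tail, rng):
    """Hybrid scheme: approximate multiplies for the early epochs,
    exact multiplies for the last `exact_tail` epochs."""
    if epoch >= total_epochs - exact_tail:
        return np.asarray(a) * np.asarray(b)  # exact for final epochs
    return approx_mul(a, b, mre, rng)

# Check that the injected error matches the target MRE on average.
rng = np.random.default_rng(0)
a = rng.standard_normal(100_000)
b = rng.standard_normal(100_000)
approx = approx_mul(a, b, mre=0.05, rng=rng)
observed_mre = np.mean(np.abs(approx - a * b) / np.abs(a * b))
```

Inside a training loop, `hybrid_mul` would replace the multiply-accumulate steps of the convolution and fully connected layers, with `epoch` advancing each pass so the last `exact_tail` epochs run exactly.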
