Impact of Approximate Multipliers on VGG Deep Learning Network

This paper presents a study on the applicability of approximate multipliers to improving the performance of the VGGNet deep learning network. Approximate multipliers are known to reduce power, area, and delay at the cost of some inaccuracy in their output. As this paper demonstrates, the performance of VGGNet in terms of power, area, and speed can therefore be improved by replacing exact multipliers with approximate ones. Simulation results show that approximate multiplication has very little impact on the accuracy of VGGNet, while using approximate multipliers yields significant performance gains. The simulations used generated error matrices that mimic the inaccuracy approximate multipliers introduce into the data, and the impact of various ranges of mean relative error and standard deviation was tested. The well-known CIFAR-10 and CIFAR-100 data sets were used to evaluate the network's classification accuracy. The impact on accuracy was assessed by simulating approximate multiplication in all layers in a first set of tests, and in selected layers in a second set. Using approximate multipliers in all layers has very little impact on the network's accuracy. Alternatively, a hybrid of exact and approximate multipliers can be used: in the hybrid approach, 39.14% of the deeper layers' multiplications can be approximate while having a negligible impact on the network's accuracy.
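The error-injection methodology described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name `approx_matmul` and the choice of a Gaussian model for the sampled relative error are assumptions made here for demonstration; the paper only states that generated error matrices with a given mean relative error (MRE) and standard deviation were applied to the multiplication results.

```python
import numpy as np

def approx_matmul(a, b, mre=0.0, std=0.0, rng=None):
    """Emulate approximate multiplication in software (hypothetical sketch).

    Computes the exact product a @ b, then perturbs each element with a
    sampled relative error whose mean (mre) and standard deviation (std)
    are configurable, mimicking an error matrix as described in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    exact = a @ b
    # One relative-error sample per output element (Gaussian is an
    # illustrative choice; the actual error distribution of a given
    # approximate multiplier would be measured from its circuit).
    rel_err = rng.normal(loc=mre, scale=std, size=exact.shape)
    return exact * (1.0 + rel_err)
```

With `mre=0` and `std=0` the function reduces to exact multiplication; sweeping both parameters reproduces the kind of accuracy-versus-error study reported in the paper, without simulating any specific multiplier circuit.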
