Energy-Efficient Embedded Inference of SVMs on FPGA

We propose an energy-efficient embedded binarized Support Vector Machine (eBSVM) architecture and present its implementation on a low-power FPGA accelerator. With binarized input activations and weights, the floating-point multiplications and additions of the dot-product operation can be replaced by bitwise XNOR and popcount operations, respectively. The proposed accelerator computes the dot product of two binarized vectors via the Hamming weight of their XNOR, reducing both execution time and energy consumption. Evaluation results show that eBSVM delivers competitive performance and higher performance-per-Watt on the MNIST and CIFAR-10 datasets compared to its fixed-point (FP) counterpart implemented on CPU and GPU, with only a small accuracy degradation.
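To make the XNOR-popcount substitution concrete, the following C sketch computes the dot product of two bit-packed vectors whose bits encode +1 (bit set) and -1 (bit clear). This is a minimal software model under assumed conventions, not the paper's hardware design: the function name bdot64 and the LSB-first packing are illustrative, and __builtin_popcountll is a GCC/Clang builtin standing in for the hardware popcount unit an FPGA datapath would use.

```c
#include <stdint.h>
#include <stdio.h>

/* Binarized dot product over n <= 64 elements.
 * Each bit encodes +1 (bit = 1) or -1 (bit = 0). XNOR marks the
 * positions where the two signs agree; if m = popcount(XNOR) is the
 * number of agreements, the dot product is m - (n - m) = 2*m - n. */
static int bdot64(uint64_t a, uint64_t b, int n) {
    uint64_t agree = ~(a ^ b);            /* XNOR: 1 where bits match  */
    if (n < 64)
        agree &= (UINT64_C(1) << n) - 1;  /* mask out unused bit lanes */
    int m = __builtin_popcountll(agree);  /* Hamming weight (popcount) */
    return 2 * m - n;                     /* matches minus mismatches  */
}

int main(void) {
    /* Example: x = (+1,-1,+1,+1), w = (+1,+1,+1,-1), packed LSB-first. */
    uint64_t x = 0xDu; /* binary 1101 */
    uint64_t w = 0x7u; /* binary 0111 */
    printf("dot = %d\n", bdot64(x, w, 4)); /* agrees in 2 of 4 -> 0    */
    return 0;
}
```

Replacing the multiply-accumulate with one XNOR and one popcount is what removes the floating-point datapath: longer vectors are handled by accumulating 2*m - n across 64-bit words.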
