Minimum Precision Requirements of General Margin Hyperplane Classifiers

Margin hyperplane classifiers such as support vector machines have achieved considerable success in various classification tasks. Their simplicity makes them suitable candidates for the design of embedded intelligent systems. Precision is an effective parameter for trading off accuracy against resource utilization. We present analytical bounds on the precision requirements of general margin hyperplane classifiers. In addition, we propose a principled precision reduction scheme based on the trade-off between input and weight precisions. We present simulation results that support our analysis and illustrate the gains of our approach in terms of reduced resource utilization. For instance, we show that a linear margin classifier with precision assignment dictated by our approach, applied to the “two versus four” task of the MNIST dataset, is $\sim 2\times$ more accurate than a standard 8-bit low-precision implementation despite using $\sim 2\times 10^{4}$ fewer 1-bit full adders and $\sim 2\times 10^{3}$ fewer bits for data and weight representation.
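To make the input/weight precision trade-off concrete, the sketch below quantizes the inputs and weights of an already-trained linear margin classifier to configurable fixed-point precisions and measures the resulting accuracy. The uniform quantizer and the helper names (`quantize`, `quantized_accuracy`) are illustrative assumptions, not the paper's precision-assignment procedure or its analytical bounds.

```python
import numpy as np

def quantize(x, num_bits, x_max):
    """Uniform symmetric fixed-point quantizer (illustrative assumption):
    maps values in [-x_max, x_max] onto a num_bits-bit grid."""
    step = x_max / (2 ** (num_bits - 1))
    codes = np.clip(np.round(x / step),
                    -2 ** (num_bits - 1), 2 ** (num_bits - 1) - 1)
    return codes * step

def quantized_accuracy(w, b, X, y, input_bits, weight_bits):
    """Accuracy of the linear margin classifier sign(w.x + b) when both
    the inputs and the weights are quantized to the given precisions."""
    Xq = quantize(X, input_bits, np.max(np.abs(X)))
    wq = quantize(w, weight_bits, np.max(np.abs(w)))
    preds = np.sign(Xq @ wq + b)
    return np.mean(preds == y)

# Example sweep over (input precision, weight precision) pairs.
# X, y, w, b are assumed to come from a full-precision training run,
# e.g., a linear SVM trained on MNIST digits 2 vs. 4 with labels in {-1, +1}.
# for bx in range(2, 9):
#     for bw in range(2, 9):
#         print(bx, bw, quantized_accuracy(w, b, X, y, bx, bw))
```

A sweep of this kind is only an empirical counterpart to the analytical bounds discussed above; it is included to illustrate how accuracy varies jointly with input and weight precisions, not to reproduce the proposed precision reduction scheme.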
