Finite precision analysis of support vector machine classification in logarithmic number systems

In this paper we analyze the minimal hardware precision required to implement support vector machine (SVM) classification in a logarithmic number system (LNS) architecture. Support vector machines are emerging as a powerful machine-learning tool for pattern recognition, decision-making, and classification. An LNS represents each number by the logarithm of its magnitude, compressing the dynamic range; in the logarithmic domain, multiplication and division reduce to addition and subtraction, so hardware for these operations is significantly faster and less complex. Leveraging these inherent properties of LNS, we achieve significant savings over double-precision floating point in an implementation of an SVM classification algorithm.
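
To make the LNS property concrete, the sketch below shows log-domain arithmetic in Python: multiplication becomes a single addition of log-magnitudes, addition requires the Gaussian logarithm sb(d) = log2(1 + 2^d) (typically a lookup table in hardware), and a linear-kernel dot product, the core of an SVM decision function, is accumulated entirely in the log domain. The sign/log2-magnitude encoding, the FRAC_BITS word size, and all function names are illustrative assumptions, not the architecture analyzed in the paper.

import math

# Minimal sketch of logarithmic number system (LNS) arithmetic. The
# sign/log2-magnitude encoding and the FRAC_BITS word size below are
# illustrative assumptions, not the paper's actual architecture.

FRAC_BITS = 8  # hypothetical number of fractional bits in the log-domain word

def quantize(v):
    # Round a log-domain value to FRAC_BITS fractional bits.
    if math.isinf(v):
        return v  # the LNS encoding of zero (log2 0 = -inf) passes through
    scale = 1 << FRAC_BITS
    return round(v * scale) / scale

def to_lns(x):
    # Encode a nonzero real as (sign, quantized log2 of its magnitude).
    return (1 if x >= 0 else -1, quantize(math.log2(abs(x))))

def from_lns(s):
    sign, e = s
    return sign * 2.0 ** e

def lns_mul(a, b):
    # Multiplication in LNS is a single addition of the log-magnitudes.
    return (a[0] * b[0], quantize(a[1] + b[1]))

def lns_add(a, b):
    # Addition needs the Gaussian logarithm sb(d) = log2(1 + 2**d),
    # usually realized in hardware as a lookup table.
    if a[1] < b[1]:          # order so that |a| >= |b|, giving d <= 0
        a, b = b, a
    d = b[1] - a[1]
    if a[0] == b[0]:
        return (a[0], quantize(a[1] + math.log2(1.0 + 2.0 ** d)))
    if d == 0:
        return (1, float("-inf"))        # exact cancellation: LNS zero
    return (a[0], quantize(a[1] + math.log2(1.0 - 2.0 ** d)))

def lns_dot(xs, ws):
    # Dot product (the core of a linear-kernel SVM decision function)
    # accumulated entirely in the log domain.
    acc = (1, float("-inf"))             # LNS zero
    for x, w in zip(xs, ws):
        acc = lns_add(acc, lns_mul(to_lns(x), to_lns(w)))
    return from_lns(acc)

print(from_lns(lns_mul(to_lns(3.0), to_lns(5.0))))   # ~15.0, up to quantization
print(lns_dot([1.5, -2.0, 0.5], [0.25, 1.0, -4.0]))  # exact value is -3.625

Sweeping FRAC_BITS downward until the classifier's decisions change is, in spirit, the kind of finite-precision experiment the abstract describes.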
