Finite-precision analysis of support vector machine classification in logarithmic number systems

In this paper we analyze the minimal hardware precision required to implement support vector machine (SVM) classification within a logarithmic number system (LNS) architecture. Support vector machines have emerged as a powerful machine-learning tool for pattern recognition, decision-making, and classification. Logarithmic number systems exploit logarithmic compression of the number range: in the logarithmic domain, multiplication and division reduce to addition and subtraction, so these operations can be computed in hardware faster and with less complexity. Leveraging these inherent properties of LNS, we achieve significant savings over double-precision floating point in an implementation of an SVM classification algorithm.
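To make the log-domain arithmetic concrete, the following Python sketch evaluates a toy SVM decision sum at reduced log-domain precision. It is an illustration of the technique, not the paper's hardware implementation: the fractional word length F, the helper names (to_lns, sb, lns_add, lns_svm_decision), and the example weights are all assumptions made for the sketch. Multiplication becomes addition of base-2 logarithms, while addition of two log-domain values uses the Gaussian logarithm sb(z) = log2(1 + 2^z), quantized here as a table-lookup adder would be; signed terms are handled by accumulating positive and negative contributions separately.

```python
import math

F = 10  # assumed number of fractional bits in the fixed-point log representation


def to_lns(x):
    """Quantize log2(x) to F fractional bits (x must be positive)."""
    return round(math.log2(x) * (1 << F)) / (1 << F)


def from_lns(e):
    """Recover the real value represented by log-domain value e."""
    return 2.0 ** e


def lns_mul(a, b):
    """LNS multiplication: add the base-2 logarithms."""
    return a + b


def sb(z):
    """Gaussian logarithm log2(1 + 2^z) for z <= 0, quantized to F
    fractional bits, mimicking a table-lookup LNS adder."""
    return round(math.log2(1.0 + 2.0 ** z) * (1 << F)) / (1 << F)


def lns_add(a, b):
    """LNS addition of two positive values: max + sb(min - max)."""
    hi, lo = (a, b) if a >= b else (b, a)
    return hi + sb(lo - hi)


def lns_svm_decision(alphas, kvals, bias):
    """Evaluate sign(sum_i alpha_i * K_i + bias) in the log domain.
    Signed terms are split into positive and negative accumulators so
    that only the sb() addition function is needed."""
    pos = neg = None
    for a, k in list(zip(alphas, kvals)) + [(bias, 1.0)]:
        if a == 0.0 or k == 0.0:
            continue  # log2(0) is undefined; zero terms contribute nothing
        mag = lns_mul(to_lns(abs(a)), to_lns(abs(k)))  # |alpha_i * K_i| in log domain
        if a * k > 0.0:
            pos = mag if pos is None else lns_add(pos, mag)
        else:
            neg = mag if neg is None else lns_add(neg, mag)
    p = from_lns(pos) if pos is not None else 0.0
    n = from_lns(neg) if neg is not None else 0.0
    return (1 if p >= n else -1), p - n


if __name__ == "__main__":
    # Hypothetical alpha_i*y_i weights and kernel evaluations K(x_i, x).
    alphas = [0.7, -1.2, 0.4, 0.9]
    kvals = [0.85, 0.10, 0.55, 0.30]
    bias = -0.05
    exact = sum(a * k for a, k in zip(alphas, kvals)) + bias
    label, approx = lns_svm_decision(alphas, kvals, bias)
    print(f"double precision: {exact:+.6f} -> class {1 if exact >= 0 else -1}")
    print(f"LNS, {F} frac bits: {approx:+.6f} -> class {label}")
```

Sweeping F downward and checking where the LNS classification first disagrees with the double-precision result is the kind of experiment that exposes the minimal precision the abstract refers to.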
