Sparse Reductions for Fixed-Size Least Squares Support Vector Machines on Large Scale Data

Fixed-Size Least Squares Support Vector Machines (FS-LSSVM) is a powerful tool for solving large-scale classification and regression problems. FS-LSSVM solves an over-determined linear system in the primal by using the Nyström approximation on a set of M prototype vectors (PVs). This introduces sparsity in the model along with the ability to scale to large datasets. However, there exists no formal method for selecting the right value of M. In this paper, we investigate the sparsity-error trade-off by introducing a second level of sparsity after performing one iteration of FS-LSSVM. This helps to overcome the problem of selecting the right number of initial PVs, as the final model is highly sparse and depends on only a few appropriately selected prototype vectors (SVs), a subset of the PVs. The first proposed method performs an iterative approximation of the L0-norm, which acts as a regularizer. The second method belongs to the category of threshold methods: for classification, we set a window and select the SV set from correctly classified PVs lying closest to and farthest from the decision boundary; for regression, we obtain the SV set by selecting the PVs with the smallest mean squared error (mse). Experiments on real-world datasets from the UCI repository illustrate that highly sparse models are obtained without a significant trade-off in error estimation, and that the approach scales to large datasets.
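
To make the two ingredients above concrete, the following is a minimal, illustrative Python sketch, not the authors' exact algorithm: it assumes an RBF kernel, synthetic regression data, randomly chosen initial PVs, and arbitrary hyperparameter values (gamma, eps, the pruning threshold). It builds a Nyström feature map from M prototype vectors and then applies an iteratively reweighted ridge solve that approximates an L0-norm penalty, so that PVs whose coefficients shrink toward zero can be dropped.

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Pairwise RBF kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(500)

# Step 1: pick M initial prototype vectors (here: at random, purely for
# illustration) and build the Nystrom feature map Phi ~ K_nm * U * Lambda^{-1/2}.
M = 50
pv_idx = rng.choice(len(X), size=M, replace=False)
PV = X[pv_idx]
K_mm = rbf_kernel(PV, PV)
K_nm = rbf_kernel(X, PV)
eigval, eigvec = np.linalg.eigh(K_mm + 1e-10 * np.eye(M))
Phi = K_nm @ eigvec / np.sqrt(np.maximum(eigval, 1e-12))

# Step 2: iteratively reweighted ridge regression approximating an L0 penalty:
# each coefficient w_i receives its own penalty lam_i = 1 / (w_i^2 + eps), so
# small coefficients are driven to zero and their PVs can be pruned.
gamma, eps = 1.0, 1e-6          # illustrative values, not from the paper
w = np.linalg.solve(Phi.T @ Phi + np.eye(M) / gamma, Phi.T @ y)
for _ in range(20):
    lam = 1.0 / (w ** 2 + eps)
    w = np.linalg.solve(Phi.T @ Phi + np.diag(lam) / gamma, Phi.T @ y)

sv_mask = np.abs(w) > 1e-4      # surviving "support" prototype vectors (SVs)
print(f"kept {sv_mask.sum()} of {M} prototype vectors")

Under the threshold variant described above, the reweighting loop would instead be replaced by ranking the correctly classified PVs by their distance to the decision boundary (or by their squared error in regression) and keeping only those falling inside the chosen window.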
