A training algorithm for sparse LS-SVM using Compressive Sampling

Least Squares Support Vector Machine (LS-SVM) has become a fundamental tool in pattern recognition and machine learning. However, its main drawback is the lack of sparseness in its solutions. In this article, Compressive Sampling (CS), which addresses sparse signal representation, is employed to find the support vectors of the LS-SVM. The main difference between our work and existing techniques is that the proposed method locates the sparse topology during training, whereas most traditional methods must first train the full model and then prune it to obtain sparse support vectors. An experimental comparison with the standard LS-SVM and existing algorithms is given for function approximation and classification problems. The results show that the proposed method achieves comparable performance with a typically much sparser model.
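As a rough illustration of the idea, not the authors' exact algorithm, the sketch below selects support vectors during training with a greedy, OMP-style search over the columns of the kernel matrix (treated as a dictionary for the targets) and then solves a reduced regularized least-squares system on the chosen columns. The RBF kernel, its width, the regularization constant, the fixed support-vector budget, and the omission of the bias term are all assumptions made for this example.

```python
import numpy as np


def rbf_kernel(X1, X2, sigma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of X1 and X2."""
    d2 = np.sum(X1**2, axis=1)[:, None] + np.sum(X2**2, axis=1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-d2 / (2 * sigma**2))


def omp_select_svs(K, y, n_sv):
    """Greedy (OMP-style) selection of support-vector indices.

    Treats the columns of the kernel matrix K as a dictionary and repeatedly
    picks the column most correlated with the current residual.
    """
    residual = y.copy()
    selected = []
    for _ in range(n_sv):
        corr = np.abs(K.T @ residual)       # correlation with the residual
        corr[selected] = -np.inf            # never re-pick a chosen column
        selected.append(int(np.argmax(corr)))
        # re-fit on the selected columns and update the residual
        coef, *_ = np.linalg.lstsq(K[:, selected], y, rcond=None)
        residual = y - K[:, selected] @ coef
    return selected


def fit_sparse_lssvm(X, y, n_sv=10, sigma=1.0, gamma=100.0):
    """Fit a simplified sparse LS-SVM regressor (bias term omitted).

    Support vectors are chosen greedily while training; the usual
    ridge-regularized least-squares system is then solved on the
    reduced kernel block only.
    """
    K = rbf_kernel(X, X, sigma)
    sv_idx = omp_select_svs(K, y, n_sv)
    K_sv = K[:, sv_idx]                                     # N x n_sv
    A = K_sv.T @ K_sv + np.eye(n_sv) / gamma                # regularized normal equations
    alpha = np.linalg.solve(A, K_sv.T @ y)
    return X[sv_idx], alpha


def predict(X_new, X_sv, alpha, sigma=1.0):
    return rbf_kernel(X_new, X_sv, sigma) @ alpha


if __name__ == "__main__":
    # Toy function-approximation example: noisy sinc curve.
    rng = np.random.default_rng(0)
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(200)
    X_sv, alpha = fit_sparse_lssvm(X, y, n_sv=15, sigma=0.5)
    y_hat = predict(X, X_sv, alpha, sigma=0.5)
    print("train RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
    print("number of support vectors:", len(X_sv))
```

In this toy setup the support-vector budget (here 15 out of 200 training points) is fixed in advance; the point of the sketch is only that the sparse topology is chosen as part of the fit rather than by pruning a dense solution afterwards.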
