A sparsity-based training algorithm for Least Squares SVM

We address the problem of training sparse Least Squares Support Vector Machines (LS-SVM) using compressed sensing. The proposed algorithm treats the training samples as a dictionary and iteratively selects the support vectors that most reduce the residual output error. A measurement matrix is also introduced to reduce the computational cost. The main advantage of the proposed algorithm is that it performs model training and support vector selection simultaneously. Its performance is evaluated on several benchmark classification problems in terms of the number of selected support vectors and the size of the measurement matrix. Simulation results show that the proposed algorithm is competitive with existing methods.
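The greedy, dictionary-based selection described above can be sketched in the style of Orthogonal Matching Pursuit: treat the kernel matrix columns as dictionary atoms, optionally compress them with a random measurement matrix, and at each step pick the atom most correlated with the current residual. This is a minimal illustrative sketch, not the authors' exact algorithm; the function names (`rbf_kernel`, `omp_select_svs`), the Gaussian measurement matrix, and all parameter choices are assumptions for illustration.

```python
# Hedged sketch: OMP-style support-vector selection for an LS-SVM,
# with an optional random measurement matrix to reduce the problem size.
# Details (kernel choice, Phi construction) are illustrative assumptions.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def omp_select_svs(K, y, n_sv, m=None, seed=0):
    """Greedily select n_sv support vectors (columns of K) a la OMP.

    K    : (n, n) kernel matrix -- its columns act as the dictionary.
    y    : (n,) target outputs.
    m    : if given, rows of a random measurement matrix Phi used to
           compress K and y before selection (assumed Gaussian here).
    Returns the selected column indices and their least-squares weights.
    """
    rng = np.random.default_rng(seed)
    if m is not None:
        Phi = rng.standard_normal((m, K.shape[0])) / np.sqrt(m)
        A, t = Phi @ K, Phi @ y           # compressed dictionary and target
    else:
        A, t = K, y
    residual = t.astype(float).copy()
    selected, coef = [], np.zeros(0)
    for _ in range(n_sv):
        corr = np.abs(A.T @ residual)     # correlation with residual
        corr[selected] = -np.inf          # never reselect an atom
        selected.append(int(np.argmax(corr)))
        As = A[:, selected]
        coef, *_ = np.linalg.lstsq(As, t, rcond=None)  # refit on support
        residual = t - As @ coef          # update residual error
    return selected, coef
```

With `m` smaller than the number of training samples, each correlation and least-squares step operates on the compressed system, which is where the computational saving would come from.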
