Ensemble Implementations on Diversified Support Vector Machines

The support vector machine (SVM) is an effective algorithm for pattern recognition, but training a standard SVM requires solving a quadratic programming (QP) problem. In practice, most SVM implementations only approximate the solution to this QP problem, and such approximate solutions cannot achieve the performance expected from structural risk minimization (SRM) theory, which motivates research on ensemble methods for SVM. Recently, to increase the diversity of the individual SVM classifiers, many researchers have formed sub-training sets by randomly partitioning the whole training set, thereby improving the performance of the aggregated SVM trained on those subsets. We propose an ensemble method based on different implementations of SVM, which exhibit large diversity because of their differing solution methods. Experimental results show that this method effectively improves the aggregated learner's performance.
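The idea of aggregating classifiers that differ by implementation rather than by data resampling can be sketched as follows. This is an illustrative toy example, not the paper's actual experimental setup: three simple linear learners stand in for distinct SVM solvers (a plain perceptron, a margin-seeking perceptron as a crude hinge-loss approximation, and a least-squares fit in the spirit of LS-SVM/proximal SVM), and their predictions are combined by majority vote.

```python
# Sketch (illustrative, not the paper's setup): ensemble over linear
# classifiers that differ by training *implementation*, aggregated by
# majority vote. Each trainer stands in for a different SVM solver.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_perceptron(X, y, epochs=100, lr=0.1, margin=0.0):
    # Mistake-driven perceptron; with margin > 0 it also updates on
    # low-margin points, a crude hinge-loss flavour.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (dot(w, xi) + b) <= margin:
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

def train_least_squares(X, y, epochs=300, lr=0.05):
    # SGD on squared error against +/-1 targets, a stand-in for
    # least-squares-style SVM variants (LS-SVM, proximal SVM).
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = dot(w, xi) + b - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if dot(w, x) + b > 0 else -1

def ensemble_predict(models, x):
    # Unweighted majority vote over the diverse base classifiers.
    votes = sum(predict(m, x) for m in models)
    return 1 if votes > 0 else -1

# Tiny linearly separable toy problem (hypothetical data).
X = [(0, 0), (1, 0), (0, 1), (2, 2), (3, 2), (2, 3)]
y = [-1, -1, -1, 1, 1, 1]

models = [
    train_perceptron(X, y),              # plain iterative solver
    train_perceptron(X, y, margin=1.0),  # margin-seeking variant
    train_least_squares(X, y),           # least-squares variant
]
ensemble_preds = [ensemble_predict(models, xi) for xi in X]
```

Because the base learners solve different optimization problems, their decision boundaries differ even on identical training data, which is the source of diversity the abstract relies on; the vote then smooths out each solver's individual approximation error.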
