To improve the performance of the base classifiers within the AdaBoost framework and to reduce the complexity of the overall ensemble learning system, this paper presents an SVM ensemble method based on an improved AdaBoost iteration process. The improved algorithm adds sample selection and feature selection to each iteration, addressing AdaBoost's susceptibility to noise and its long training time. First, sample subsets are selected by means of a mean nearest neighbor algorithm. Second, feature subsets are obtained using relative entropy. Finally, in each cycle an individual SVM classifier is trained on the resulting optimal feature-sample subset, and the classifiers are combined via majority vote to form the final decision system. Simulation results on UCI datasets show that, compared with the traditional AdaBoost support vector machine ensemble algorithm, the proposed algorithm achieves higher recognition accuracy while using fewer samples and features.
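The iteration described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: the "mean nearest neighbor" filter is interpreted here as keeping the samples nearest to their class mean, and the relative-entropy feature ranking uses KL divergence between smoothed per-class histograms; the round count, subset sizes, and bin count are arbitrary assumptions.

```python
# Hedged sketch of an improved AdaBoost-style SVM ensemble:
# per round, select a sample subset (mean-nearest-neighbor style),
# select a feature subset (relative entropy), train an SVM, then
# combine the individual SVMs by unweighted majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           random_state=0)  # binary toy data, assumption

def select_samples(X, y, k_per_class=40):
    """Keep the k samples nearest to their own class mean
    (a 'mean nearest neighbor' style filter to suppress noisy points)."""
    keep = []
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        mean = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - mean, axis=1)
        keep.extend(idx[np.argsort(d)[:k_per_class]])
    return np.array(keep)

def select_features(X, y, m=8, bins=10):
    """Rank features by the relative entropy (KL divergence) between
    the two class-conditional histograms; keep the m highest-scoring."""
    scores = []
    for j in range(X.shape[1]):
        edges = np.histogram_bin_edges(X[:, j], bins=bins)
        p, _ = np.histogram(X[y == 0, j], bins=edges)
        q, _ = np.histogram(X[y == 1, j], bins=edges)
        p = (p + 1) / (p.sum() + bins)   # Laplace smoothing
        q = (q + 1) / (q.sum() + bins)
        scores.append(np.sum(p * np.log(p / q)))
    return np.argsort(scores)[-m:]

classifiers = []
for t in range(5):                       # boosting-style rounds
    idx = select_samples(X, y)
    # bootstrap-resample the filtered pool so the base learners differ
    idx = rng.choice(idx, size=len(idx), replace=True)
    feats = select_features(X[idx], y[idx])
    clf = SVC(kernel="rbf").fit(X[idx][:, feats], y[idx])
    classifiers.append((clf, feats))

# final decision system: majority vote of the individual SVMs
votes = np.stack([clf.predict(X[:, f]) for clf, f in classifiers])
pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy on the full set:", (pred == y).mean())
```

Each base SVM here sees only the filtered samples and the most discriminative features, which is what lets the ensemble train on fewer samples and features than a plain AdaBoost-SVM.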