Ensemble Feature Selection with Dynamic Integration of Classifiers

Recent research has demonstrated the benefits of using ensembles of classifiers for classification problems. Ensembles of classifiers can be constructed by a number of methods that manipulate the training set so as to create a set of diverse and accurate base classifiers. One way to manipulate the training set when constructing the base classifiers is to apply feature selection. In this paper we evaluate the contextual merit measure as a feature selection heuristic for ensemble construction with different strategies for ensemble integration. We analyze and experiment with five different ensemble integration strategies, with an emphasis on dynamic integration. The dynamic integration of classifiers is based on the assumption that each base classifier performs best within certain subareas of the whole instance space. We compare dynamic integration with static integration in the context of ensemble feature selection. In the experiments, dynamic integration shows significantly better results on average than static integration approaches such as cross-validation majority and weighted voting. We also analyze how the ensemble accuracy depends on the number of neighboring instances taken into account in dynamic integration and on the use of cross-validation to evaluate the base classifiers.
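
As a rough illustration of the dynamic integration idea described above, the sketch below (not the paper's exact procedure) builds base classifiers on different feature subsets, estimates their per-instance errors on the training data with cross-validation, and, for each test instance, selects the classifier with the lowest average estimated error among its k nearest training neighbors. The dataset, the decision-tree learner, the random feature subsets (standing in for contextual-merit-based selection), and k = 7 are all illustrative assumptions.

```python
# Minimal sketch of dynamic classifier selection based on local accuracy.
# All concrete choices below (data, learner, feature subsets, k) are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_features = X.shape[1]
# Base classifiers are built on different feature subsets (ensemble feature selection);
# random subsets are used here purely for illustration.
subsets = [rng.choice(n_features, size=n_features // 2, replace=False) for _ in range(5)]

base_clfs, local_errors = [], []
for fs in subsets:
    clf = DecisionTreeClassifier(random_state=0)
    # Cross-validated predictions give per-instance error estimates for each
    # base classifier on the training data.
    cv_pred = cross_val_predict(clf, X_train[:, fs], y_train, cv=5)
    local_errors.append((cv_pred != y_train).astype(float))
    base_clfs.append(clf.fit(X_train[:, fs], y_train))
local_errors = np.array(local_errors)              # shape: (n_classifiers, n_train)

# Dynamic selection: for each test instance, average each classifier's estimated
# errors over its k nearest training instances and pick the classifier with the
# lowest local error estimate.
k = 7
nn = NearestNeighbors(n_neighbors=k).fit(X_train)
_, neigh_idx = nn.kneighbors(X_test)
neigh_err = local_errors[:, neigh_idx].mean(axis=2)  # (n_classifiers, n_test)
best = neigh_err.argmin(axis=0)

y_pred = np.array([
    base_clfs[c].predict(X_test[i:i + 1, subsets[c]])[0]
    for i, c in enumerate(best)
])
print("dynamic selection accuracy:", (y_pred == y_test).mean())
```

In this sketch the cross-validated error estimates serve as the local performance information on which dynamic selection relies; a dynamic voting variant would instead weight each base classifier's prediction by its estimated local accuracy rather than selecting a single classifier.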