Use of multi-objective genetic algorithms to investigate the diversity/accuracy dilemma in heterogeneous ensembles

Classifier ensembles, also known as committees, are systems composed of a set of base classifiers (organized in parallel) and a combination module, which is responsible for providing the final output of the system. The main aim of using ensembles is to achieve better performance than any of the individual classifiers. In order to build robust ensembles, the base classifiers are often required to be both accurate and diverse among themselves - this is known as the diversity/accuracy dilemma. Some works in the literature analyze ensemble performance in the context of this dilemma. However, most of them address homogeneous structures, i.e., ensembles composed of only one type of classifier. Motivated by this limitation, this paper presents an empirical investigation of the diversity/accuracy dilemma for heterogeneous ensembles. To do so, multi-objective genetic algorithms are used to guide the construction of the ensemble systems.
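The sketch below illustrates the general idea described in the abstract, not the authors' exact experimental setup: a heterogeneous pool of base classifiers is trained, candidate ensembles are encoded as bit masks, and a simplified multi-objective genetic algorithm (Pareto-rank selection only, without NSGA-II's crowding distance) searches for trade-offs between validation error and pairwise disagreement. The dataset, classifier pool, diversity measure, and GA parameters are illustrative assumptions.

```python
# Minimal sketch, assuming a scikit-learn classifier pool and a simplified
# Pareto-rank GA; all choices below are illustrative, not the paper's setup.
import random
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

random.seed(0)
np.random.seed(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Heterogeneous pool of base classifiers (types chosen only for illustration).
pool = [DecisionTreeClassifier(max_depth=d, random_state=i) for i, d in enumerate((2, 4, 8))]
pool += [KNeighborsClassifier(n_neighbors=k) for k in (1, 5, 15)]
pool += [GaussianNB(), LogisticRegression(max_iter=2000)]
preds = np.array([clf.fit(X_tr, y_tr).predict(X_val) for clf in pool])

def objectives(mask):
    """Return (error, 1 - diversity) for the ensemble encoded by a bit mask."""
    idx = [i for i, b in enumerate(mask) if b]
    if len(idx) < 2:                      # degenerate ensembles are penalized
        return (1.0, 1.0)
    votes = preds[idx]
    majority = (votes.mean(axis=0) >= 0.5).astype(int)
    error = np.mean(majority != y_val)
    # mean pairwise disagreement among the selected members
    dis = np.mean([np.mean(votes[a] != votes[b])
                   for a in range(len(idx)) for b in range(a + 1, len(idx))])
    return (error, 1.0 - dis)             # both objectives are minimized

def dominates(f, g):
    return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

def evolve(pop_size=20, generations=30, n=len(pool)):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            p1, p2 = random.sample(pop, 2)
            cut = random.randrange(1, n)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:                 # bit-flip mutation
                j = random.randrange(n)
                child[j] ^= 1
            children.append(child)
        combined = pop + children
        scored = [(objectives(ind), ind) for ind in combined]
        # simplified Pareto-rank selection (NSGA-II adds crowding distance)
        front = [ind for f, ind in scored
                 if not any(dominates(g, f) for g, _ in scored)]
        rest = sorted((s for s in scored if s[1] not in front), key=lambda s: s[0])
        pop = (front + [ind for _, ind in rest])[:pop_size]
    return [(objectives(ind), ind) for ind in pop]

for (err, inv_div), mask in sorted(evolve())[:5]:
    print(f"error={err:.3f}  diversity={1 - inv_div:.3f}  members={mask}")
```

The final population approximates a Pareto front, so inspecting it shows how much accuracy must be traded for additional diversity in this heterogeneous pool.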
