Ambiguity-guided dynamic selection of ensemble of classifiers

Dynamic classifier selection has traditionally focused on selecting the single most accurate classifier to predict the class of a particular test pattern. In this paper we propose a new dynamic selection method that selects, from a population of ensembles, the most confident ensemble of classifiers to label the test sample. This level of confidence is measured by calculating the ambiguity of the ensemble on each test sample. We show theoretically and experimentally that choosing, from a population of highly accurate ensembles, the ensemble with the lowest ambiguity among its members increases the confidence of classification and, consequently, the generalization performance. Experimental results comparing the proposed method to static selection and DCS-LA demonstrate that our method outperforms both strategies when a population of highly accurate ensembles is available.
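The selection rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes ambiguity is measured as the fraction of member classifiers that disagree with the ensemble's majority vote on the test sample (one common definition), and the function names `ambiguity` and `select_ensemble` are hypothetical.

```python
import numpy as np

def ambiguity(votes):
    """Fraction of member votes that disagree with the majority label.

    votes: 1-D array of class labels, one per classifier in the ensemble.
    Returns 0.0 when all members agree (maximum confidence).
    """
    values, counts = np.unique(votes, return_counts=True)
    majority = values[np.argmax(counts)]
    return float(np.mean(votes != majority))

def select_ensemble(ensembles, x):
    """From a population of ensembles, pick the one with lowest
    ambiguity on test sample x (ties keep the first ensemble found).

    Each ensemble is a list of callables mapping a sample to a label.
    """
    best, best_amb = None, float("inf")
    for ens in ensembles:
        votes = np.array([clf(x) for clf in ens])
        amb = ambiguity(votes)
        if amb < best_amb:
            best, best_amb = ens, amb
    return best
```

In practice the population of ensembles would first be filtered for high accuracy (e.g. on a validation set), since the paper's claim applies to selecting the least ambiguous ensemble among already-accurate candidates.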
