Analyzing Dynamic Ensemble Selection Techniques Using Dissimilarity Analysis

In Dynamic Ensemble Selection (DES), only the most competent classifiers are selected to classify a given query sample. A crucial issue in DES is the definition of a criterion for measuring the level of competence of each base classifier. A commonly used criterion estimates the competence of a base classifier from its local accuracy in small regions of the feature space surrounding the query instance. However, such a criterion cannot achieve results close to the performance of the Oracle, which represents the upper limit of any DES technique. In this paper, we conduct a dissimilarity analysis between various DES techniques in order to better understand both the relationships among them and the behavior of the Oracle. In our experimental study, we evaluate seven DES techniques and the Oracle using eleven public datasets. One of the seven DES techniques was proposed by the authors and uses meta-learning to define the competence of base classifiers based on different criteria. In the dissimilarity analysis, this proposed technique appears closer to the Oracle than the others, suggesting that using different sources of information about the behavior of base classifiers is important for improving the precision of DES techniques. Furthermore, DES techniques such as LCA, OLA, and MLA, which use similar criteria to define the level of competence of base classifiers, are more likely to produce similar results.
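The two central notions of the abstract, local-accuracy competence estimation and the Oracle, can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: it assumes base classifiers are plain callables, uses Euclidean distance to define the region of competence (as in OLA-style techniques), and treats the Oracle as "at least one classifier in the pool is correct."

```python
import numpy as np

def ola_competence(classifiers, X_val, y_val, query, k=7):
    """Overall Local Accuracy sketch: each classifier's competence is
    its accuracy on the k validation samples nearest the query
    (its region of competence)."""
    dists = np.linalg.norm(X_val - query, axis=1)
    region = np.argsort(dists)[:k]
    return [float(np.mean([clf(x) == y
                           for x, y in zip(X_val[region], y_val[region])]))
            for clf in classifiers]

def oracle_correct(classifiers, x, y):
    """The Oracle counts a query as correct if at least one base
    classifier in the pool predicts its label correctly."""
    return any(clf(x) == y for clf in classifiers)

# Toy pool: one good threshold rule and one constant classifier.
clf_a = lambda x: int(x[0] > 1.5)
clf_b = lambda x: 0
X_val = np.array([[0.0], [1.0], [2.0], [3.0]])
y_val = np.array([0, 0, 1, 1])

# Near x = 2.5 the constant classifier is locally incompetent.
print(ola_competence([clf_a, clf_b], X_val, y_val,
                     np.array([2.5]), k=2))   # [1.0, 0.0]
```

A DES technique would then select, for each query, only the classifiers whose local competence exceeds some threshold; the Oracle is the ceiling because it "selects" with knowledge of the true label.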
