Semantic Explanations in Ensemble Learning

A combination method is an integral part of an ensemble classifier. Existing combination methods determine the combined prediction for a new instance by relying on the predictions made by the majority of base classifiers, which can produce an incorrect combined prediction whenever the majority predicts the wrong class. It has been noted in group decision-making that a majority decision whose members give inconsistent reasons can be less reliable than a minority decision whose members give consistent reasons. Based on this observation, in this paper we propose a new combination method, EBCM, which considers the consistency of features, i.e., the explanations of individual predictions, when generating ensemble classifiers. EBCM first identifies the features accountable for each base classifier's prediction, then uses these features to measure the consistency among the predictions, and finally combines the predictions based on both the majority and the consistency of features. We evaluated EBCM on 16 real-world datasets and observed substantial improvement over existing techniques.
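To make the idea concrete, below is a minimal sketch of an explanation-aware combination rule in the spirit described above. It assumes each base classifier supplies a feature-attribution vector for its prediction (e.g., from a LIME-style explainer); the cosine-similarity consistency measure and the votes-times-consistency score are illustrative assumptions for this example, not the paper's exact formulation of EBCM.

```python
# Sketch: combine base-classifier votes using both majority size and the
# consistency of the explanations (feature attributions) behind each class.
# The consistency measure and scoring rule are assumptions for illustration.

from collections import defaultdict
import numpy as np


def cosine(u, v):
    """Cosine similarity with a guard against zero vectors."""
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return 0.0 if nu == 0 or nv == 0 else float(np.dot(u, v) / (nu * nv))


def group_consistency(attributions):
    """Mean pairwise cosine similarity of attribution vectors within one class group."""
    if len(attributions) == 1:
        return 1.0
    sims = [cosine(attributions[i], attributions[j])
            for i in range(len(attributions))
            for j in range(i + 1, len(attributions))]
    return float(np.mean(sims))


def combine(predictions, attributions):
    """Combine predictions using both the size of each voting group and the
    consistency of the explanations supporting that group.

    predictions  : list of predicted class labels, one per base classifier
    attributions : list of 1-D feature-attribution arrays, aligned with predictions
    """
    groups = defaultdict(list)
    for label, attr in zip(predictions, attributions):
        groups[label].append(np.asarray(attr, dtype=float))

    # Score each class by votes * consistency (an illustrative aggregation).
    scores = {label: len(attrs) * group_consistency(attrs)
              for label, attrs in groups.items()}
    return max(scores, key=scores.get)


if __name__ == "__main__":
    # Three classifiers vote "pos" with conflicting explanations; two vote "neg"
    # with nearly identical explanations, so the consistent minority wins.
    preds = ["pos", "pos", "pos", "neg", "neg"]
    attrs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
             [0.9, 0.8, 0.1], [0.85, 0.75, 0.15]]
    print(combine(preds, attrs))  # -> "neg"
```

In this toy run the "pos" group has three votes but mutually orthogonal attributions (consistency 0), while the "neg" group has two votes with nearly identical attributions, so the consistent minority outweighs the inconsistent majority.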
