Generation and Evaluation of Factual and Counterfactual Explanations for Decision Trees and Fuzzy Rule-based Classifiers

Data-driven classification algorithms have proven highly effective in a range of complex tasks. However, their output is sometimes questioned because the reasoning behind it can be obscured by the large number of poorly interpretable parameters fitted during training. Evidence-based (factual) explanations for a single classification answer the question of why a particular class was selected, in terms of the given observations. In contrast, counterfactual explanations address why the remaining classes were not selected. Accordingly, we hypothesize that providing classifiers with a combination of factual and counterfactual explanations is likely to make them more trustworthy. To investigate how such explanations can be produced, we introduce a new method for generating factual and counterfactual explanations for the output of pretrained decision trees and fuzzy rule-based classifiers. Experimental results show that unifying factual and counterfactual explanations under the paradigm of fuzzy inference systems is a promising approach to explaining the reasoning of classification algorithms.
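To make the notion of a factual explanation concrete, the following is a minimal sketch (not the paper's method) of how such an explanation can be read off a pretrained decision tree: the factual explanation for one instance is the conjunction of split conditions along its root-to-leaf path. A counterfactual explanation would instead state which of these conditions would have to change for a different class to be selected. The sketch assumes scikit-learn and its bundled iris dataset; the helper `factual_explanation` is a hypothetical name introduced here for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Train a small, interpretable tree on the iris data.
data = load_iris()
X, y, names = data.data, data.target, data.feature_names
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def factual_explanation(clf, x, feature_names):
    """Return the split conditions satisfied by x on its root-to-leaf path."""
    tree = clf.tree_
    node_ids = clf.decision_path([x]).indices  # nodes visited by this instance
    conditions = []
    for node in node_ids:
        if tree.children_left[node] == -1:  # leaf node: no split condition here
            continue
        f, t = tree.feature[node], tree.threshold[node]
        if x[f] <= t:
            conditions.append(f"{feature_names[f]} <= {t:.2f}")
        else:
            conditions.append(f"{feature_names[f]} > {t:.2f}")
    return conditions

x = X[100]
print("Predicted class:", data.target_names[clf.predict([x])[0]])
print("Because:", " AND ".join(factual_explanation(clf, x, names)))
```

A fuzzy rule-based classifier admits the same reading: the fired rule's antecedents play the role of the path conditions, with membership degrees qualifying how strongly each condition holds.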
