Fighting the disagreement in Explainable Machine Learning with consensus

Machine learning (ML) models are often judged solely by the accuracy of their predictions. However, in some areas of science, the inner workings of a model are as relevant as its accuracy. Interpretability algorithms are the preferred option for understanding how ML models work internally. Unfortunately, despite the diversity of algorithms available, they often disagree with one another, producing contradictory explanations of the same model. To cope with this issue, consensus functions can be applied once the models have been explained. Nevertheless, this does not completely solve the problem, because the final result depends on the selected consensus function and other factors. In this paper, six consensus functions are evaluated for explaining five ML models. The models were first trained on four synthetic datasets whose internal rules were known in advance. The models were then explained with model-agnostic local and global interpretability algorithms. Finally, consensus was calculated with six different functions, including one developed by the authors. The results demonstrate that the proposed function is fairer than the others and provides more consistent and accurate explanations.
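To make the consensus idea concrete, the sketch below aggregates per-feature importance scores produced by several explainers into a single ranking. The abstract does not specify the functions the paper evaluates, so the aggregation rules shown here (mean, median, harmonic mean) and all names in the code are illustrative assumptions, not the authors' method:

```python
# Minimal sketch of consensus over feature attributions, assuming each
# interpretability algorithm yields one importance score per feature.
# The consensus functions below are generic illustrations, not the
# specific functions evaluated in the paper.
import numpy as np

def normalize(scores: np.ndarray) -> np.ndarray:
    """Scale absolute importances so each explainer's scores sum to 1."""
    scores = np.abs(scores)
    total = scores.sum()
    return scores / total if total > 0 else scores

def consensus(attributions: list[np.ndarray], how: str = "mean") -> np.ndarray:
    """Aggregate per-feature importances from several explainers."""
    stacked = np.vstack([normalize(a) for a in attributions])
    if how == "mean":
        return stacked.mean(axis=0)
    if how == "median":
        return np.median(stacked, axis=0)
    if how == "harmonic":
        # The harmonic mean penalizes features that any explainer ranks low,
        # so a feature only scores high if the explainers broadly agree.
        eps = 1e-12
        return stacked.shape[0] / (1.0 / (stacked + eps)).sum(axis=0)
    raise ValueError(f"unknown consensus function: {how}")

# Toy usage: three hypothetical explainers scoring four features.
shap_like = np.array([0.40, 0.30, 0.20, 0.10])
lime_like = np.array([0.35, 0.35, 0.10, 0.20])
perm_like = np.array([0.50, 0.20, 0.25, 0.05])
print(consensus([shap_like, lime_like, perm_like], how="harmonic"))
```

Normalizing each explainer's attributions before aggregating keeps any single algorithm's scale from dominating the consensus, which matters when mixing methods whose raw outputs are not comparable (e.g., SHAP values versus permutation-importance drops).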
