Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions

The development of theory, frameworks and tools for Explainable AI (XAI) is a very active area of research these days, and articulating any kind of coherent vision and set of challenges is itself a challenge. At least two threads, sometimes complementary and sometimes colliding, have emerged. The first focuses on the development of pragmatic tools for increasing the transparency of automatically learned prediction models, such as those produced by deep or reinforcement learning. The second aims to anticipate the negative impact of opaque models, with the desire to regulate or control the consequences of incorrect predictions, especially in sensitive areas like medicine and law. Methods that augment the construction of predictive models with domain knowledge can support the production of human-understandable explanations for predictions. This runs in parallel with AI regulatory concerns, such as the European Union General Data Protection Regulation, which sets standards for the production of explanations from automated or semi-automated decision making. While all this research activity reflects a growing acknowledgement that explainability is essential, it is important to recall that explanation is also among the oldest topics in computer science: early AI systems were retraceable and interpretable, and thus understandable by and explainable to humans. The goal of this research is to articulate the big-picture ideas and their role in advancing the development of XAI systems, to acknowledge their historical roots, and to emphasise the biggest challenges to moving forward.
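To make the first thread concrete, the following minimal sketch (illustrative, not taken from the paper) shows one common model-agnostic transparency technique: fitting a shallow decision-tree surrogate to an opaque classifier's predictions so that its behaviour can be read as rules. The dataset, model choices, and fidelity measure below are assumptions made for the example.

# A minimal sketch (illustrative, not from the paper) of a global surrogate:
# an interpretable model trained to mimic an opaque classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The opaque model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Fit a shallow, human-readable tree to the black box's *predictions*,
# not to the ground-truth labels: the tree approximates the model itself.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the surrogate tracks the black box.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"surrogate fidelity to the black box: {fidelity:.2f}")

# The extracted rules form a human-understandable explanation surface.
print(export_text(surrogate, feature_names=list(data.feature_names)))

Because the surrogate is scored against the black box's own outputs, the fidelity figure quantifies how faithfully the extracted rules mimic the opaque model, which is the quantity that matters for explanation, rather than predictive accuracy on the ground truth.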
