Generating contrastive explanations for inductive logic programming based on a near miss approach

In recent research, human-understandable explanations of machine learning models have received a lot of attention. Explanations are often given in the form of model simplifications or visualizations. However, as shown in cognitive science as well as in early AI research, concept understanding can also be improved by aligning a given instance of a concept with a similar counterexample. Contrasting a given instance with a structurally similar example that does not belong to the concept highlights which characteristics are necessary for concept membership. Such near misses have been proposed by Winston (Learning structural descriptions from examples, 1970) as efficient guidance for learning in relational domains. We introduce GeNME, an explanation generation algorithm for relational concepts learned with Inductive Logic Programming. The algorithm identifies near miss examples in a given set of instances and ranks them by their degree of closeness to a specific positive instance. A modified rule which covers the near miss but not the original instance is given as an explanation. We illustrate GeNME on the well-known family domain of kinship relations, the visual relational Winston arches domain, and a real-world domain dealing with file management. We also present a psychological experiment comparing human preferences for rule-based, example-based, and near miss explanations in the family and arches domains.
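To make the near-miss idea concrete, here is a minimal Python sketch of the selection step the abstract describes, under a toy version of the family domain. The facts, the grandfather rule, the candidate contrast pairs, and the count-of-violated-body-literals distance are all illustrative assumptions; this is not the published GeNME algorithm or its closeness measure.

# Illustrative sketch only: facts, rule, and distance measure are assumed
# for demonstration; this is not the authors' GeNME implementation.

# Ground facts of a toy family domain, stored as (predicate, arguments) pairs.
FACTS = {
    ("male", ("ian",)),
    ("female", ("anna",)),
    ("parent", ("ian", "tom")),
    ("parent", ("anna", "tom")),
    ("parent", ("tom", "kate")),
}

# All constants occurring in the facts (possible bindings for variable Z).
CONSTANTS = {c for _, args in FACTS for c in args}

# Learned rule: grandfather(X, Y) :- male(X), parent(X, Z), parent(Z, Y).
RULE_BODY = [("male", ("X",)), ("parent", ("X", "Z")), ("parent", ("Z", "Y"))]


def satisfied(literal, binding):
    """True if the body literal holds in FACTS under the variable binding."""
    predicate, args = literal
    return (predicate, tuple(binding.get(a, a) for a in args)) in FACTS


def distance(x, y):
    """Smallest number of violated body literals over all bindings of Z."""
    return min(
        sum(not satisfied(lit, {"X": x, "Y": y, "Z": z}) for lit in RULE_BODY)
        for z in CONSTANTS
    )


def near_misses(candidates, max_distance=1):
    """Uncovered instances, ranked by closeness (fewest violated literals)."""
    scored = [(pair, distance(*pair)) for pair in candidates]
    return sorted((m for m in scored if 0 < m[1] <= max_distance),
                  key=lambda m: m[1])


if __name__ == "__main__":
    # Positive instance: grandfather(ian, kate). Contrast candidates:
    for (x, y), d in near_misses([("anna", "kate"), ("tom", "kate")]):
        print(f"near miss grandfather({x}, {y}) violates {d} body literal(s)")
    # Prints only anna: male(anna) is the single failing literal, so the
    # contrast "anna is female, not male" explains non-membership.

For the positive instance grandfather(ian, kate), only anna qualifies at distance 1, since male(anna) is the single failing body literal; flipping that literal (male to female) yields a modified rule that covers the near miss but not the original instance, which is the contrastive explanation pattern described above.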

[1] D. Gentner et al. Learning and Transfer: A General Role for Analogical Encoding, 2003.

[2] Michael Siebers et al. Please delete that! Why should I?, 2018, KI - Künstliche Intelligenz.

[3] Ute Schmid et al. Interactive Learning with Mutual Explanations in Relational Domains, 2021, Human-Like Machine Intelligence.

[4] Andrew McCallum et al. Introduction to Statistical Relational Learning, 2007.

[5] Charu C. Aggarwal et al. Efficient Data Representation by Selecting Prototypes with Importance Weights, 2019, IEEE International Conference on Data Mining (ICDM).

[6] Klaus-Robert Müller et al. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models, 2017, ArXiv.

[7] Brandon M. Greenwell et al. Interpretable Machine Learning, 2019, Hands-On Machine Learning with R.

[8] Eleanor Rosch. Principles of Categorization, 1978.

[9] Wojciech Samek et al. Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, 2019, Explainable AI.

[10] Oluwasanmi Koyejo et al. Examples are not enough, learn to criticize! Criticism for Interpretability, 2016, NIPS.

[11] Leon Sterling et al. The Art of Prolog - Advanced Programming Techniques, 1986.

[12] Francisco Herrera et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI, 2020, Inf. Fusion.

[13] José Hernández-Orallo et al. The teaching size: computable teachers and learners for universal languages, 2019, Machine Learning.

[14] L. Thurstone. A law of comparative judgment, 1994.

[15] Carlos Guestrin et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.

[16] M. J. Sternberg et al. Structure-activity relationships derived by machine learning: the use of atoms and their bond connectivities to predict mutagenicity by inductive logic programming, 1996, Proceedings of the National Academy of Sciences of the United States of America.

[17] R. Tibshirani et al. Prototype selection for interpretable classification, 2011, arXiv:1202.5933.

[18] Seyed Mehran Kazemi et al. RelNN: A Deep Neural Model for Relational Learning, 2017, AAAI.

[19] John L. Pollock. The ‘possible worlds’ analysis of counterfactuals, 1976.

[20] Matthew Lease et al. Believe it or not: Designing a Human-AI Partnership for Mixed-Initiative Fact-Checking, 2018, UIST.

[21] Tim Miller. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.

[22] Mark O. Riedl et al. Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations, 2017, AIES.

[23] Stephen Muggleton et al. Ultra-Strong Machine Learning: comprehensibility of programs learned with ILP, 2018, Machine Learning.

[24] Luc De Raedt et al. Inductive Logic Programming: Theory and Methods, 1994, J. Log. Program.

[25] Mark E. Stickel. A Prolog-like inference system for computing minimum-cost abductive explanations in natural-language interpretation, 1991, Annals of Mathematics and Artificial Intelligence.

[26] Amina Adadi et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.

[27] Amit Dhurandhar et al. Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives, 2018, NeurIPS.

[28] Hisao Tamaki et al. OLD Resolution with Tabulation, 1986, ICLP.

[29] Joseph Jay Williams et al. The role of explanation in discovery and generalization: evidence from category learning, 2010, ICLS.

[30] Ute Schmid et al. A Closer Look at Structural Similarity in Analogical Transfer, 2002.

[31] Patrick Henry Winston. Learning structural descriptions from examples, 1970.

[32] Chris Russell et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, ArXiv.

[33] Ute Schmid et al. Beneficial and harmful explanatory machine learning, 2020, Machine Learning.

[34] Sergio Gomez Colmenarejo et al. Hybrid computing using a neural network with dynamic external memory, 2016, Nature.

[35] A. Culyer. Thurstone’s Law of Comparative Judgment, 2014.

[36] D. Gentner et al. Structural Alignment in Comparison: No Difference Without Similarity, 1994, Psychological Science.

[37] Herbert H. Clark. Semantics: A new outline, 1976.

[38] Jure Leskovec et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction, 2016, KDD.