Please delete that! Why should I?

Dare2Del is an assistive system that facilitates intentional forgetting of irrelevant digital objects. For an assistive system to be helpful, the user has to trust its decisions, and explanations are a crucial component in establishing this trust. We introduce different types of explanations, which can vary along dimensions such as level of detail and modality, making them suitable for different application contexts. We outline the cognitive companion system Dare2Del, which is intended to support users in managing digital objects in a working environment. The core of Dare2Del is an interpretable machine learning mechanism that induces decision rules to classify whether a digital object is irrelevant. In this paper, we focus on the irrelevance of files and formalize the decision-making process as logic inference. Finally, we present a method for generating verbal explanations for irrelevance decisions and show how such explanations can be constructed at different levels of detail. Furthermore, we show how verbal explanations can be related to the path context of the file. We conclude with a short discussion of the scope and limitations of our approach.
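As a rough illustration (not taken from the paper), the following minimal Python sketch shows how induced decision rules paired with explanation templates could yield verbal explanations for irrelevance decisions. The File class, the two rule predicates, the two-year threshold, and the function explain_irrelevance are all hypothetical assumptions; the paper formalizes such rules as logic clauses and inference over them, not as Python code.

# Hypothetical sketch only: irrelevance rules with attached explanation templates.
# None of these predicates or thresholds are the rules Dare2Del actually induces.
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class File:
    path: str
    last_accessed: date
    is_backup_copy: bool

# Each learned rule is modelled as a (condition, explanation-template) pair;
# a file is classified as irrelevant if any rule body holds for it.
RULES = [
    (lambda f: f.is_backup_copy,
     "it is a backup copy of another file"),
    (lambda f: date.today() - f.last_accessed > timedelta(days=730),
     "it has not been accessed for more than two years"),
]

def explain_irrelevance(f: File) -> Optional[str]:
    """Return a verbal explanation if some rule fires, else None (keep the file)."""
    reasons = [why for holds, why in RULES if holds(f)]
    if not reasons:
        return None
    # The explanation mentions the file's path, relating it to its path context.
    return f"'{f.path}' may be irrelevant because " + " and ".join(reasons) + "."

if __name__ == "__main__":
    old_copy = File("projects/report_final_old.docx", date(2016, 3, 4), True)
    print(explain_irrelevance(old_copy))

In the paper's setting, each rule would correspond to a learned logic clause, and a clause firing during inference would instantiate the matching explanation template; varying how many clause conditions are verbalized is one way to realize different levels of detail.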
