Justification Narratives for Individual Classifications

Machine learning models are now used extensively for decision making in diverse applications, but for non-expert users they are essentially black boxes. While there has been some work on explaining classifications, it is targeted at the expert user. For the non-expert, a better model is one of justification: not detailing how the model reached its decision, but justifying that decision to the human user on his or her own terms. In this paper we introduce the idea of a justification narrative: a simple, model-agnostic mapping of the essential values underlying a classification to a semantic space. We present a package that automatically produces these narratives and realizes them visually or textually.
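The core idea of a model-agnostic mapping from classification evidence to a semantic space can be illustrated with a minimal sketch. The following Python fragment is purely hypothetical and does not reproduce the paper's package: it assumes per-feature signed "effect" scores are available from some attribution method (which is what makes it model-agnostic), maps each score to a qualitative narrative role, and realizes a short textual justification. All function names, thresholds, and role labels are illustrative assumptions.

```python
def narrative_role(effect, threshold=0.5):
    """Map a signed effect score to a qualitative narrative role (illustrative thresholds)."""
    if effect >= threshold:
        return "strong evidence for"
    if effect > 0:
        return "weak evidence for"
    if effect <= -threshold:
        return "strong evidence against"
    if effect < 0:
        return "weak evidence against"
    return "no effect on"

def realize_narrative(label, effects):
    """Render a one-sentence textual justification, most influential features first."""
    parts = [f"{feat} is {narrative_role(e)} the decision"
             for feat, e in sorted(effects.items(), key=lambda kv: -abs(kv[1]))]
    return f"Classified as '{label}' because " + "; ".join(parts) + "."

# Example: a spam classification justified from two (made-up) feature effects.
print(realize_narrative("spam", {"num_links": 0.9, "sender_known": -0.2}))
```

The same role mapping could equally feed a visual realization (e.g., colored bars per role) rather than text, which is the sense in which the narrative layer is independent of both the underlying model and the output medium.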
