Why and why not explanations improve the intelligibility of context-aware intelligent systems

Context-aware intelligent systems employ implicit inputs and make decisions based on complex rules and machine learning models that are rarely clear to users. This lack of system intelligibility can lead to a loss of user trust in, satisfaction with, and acceptance of these systems. However, automatically providing explanations about a system's decision process can help mitigate this problem. In this paper we present results from a controlled study with over 200 participants in which we examined the effectiveness of different types of explanations. Participants were shown examples of a system's operation along with various automatically generated explanations, and were then tested on their understanding of the system. We show, for example, that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust. Explanations describing why the system did not behave a certain way resulted in lower understanding yet adequate performance. We discuss implications for applying our findings in real-world context-aware applications.
