Do You Get It? User-Evaluated Explainable BDI Agents

In this paper we focus on explaining the behavior of autonomous agents to humans, i.e., on explainable agents. Explainable agents are useful in many settings, including scenario-based training (e.g., disaster training), tutoring and pedagogical systems, agent development and debugging, gaming, and interactive storytelling. Since the aim is to generate explanations that humans find plausible and insightful, user evaluation of different explanations is essential. We test the hypothesis that different explanation types are needed to explain different types of actions. We present three generically applicable algorithms that automatically generate different types of explanations for the actions of BDI-based agents. Quantitative analysis of a user experiment (n=30), in which users rated the usefulness and naturalness of each explanation type for different agent actions, supports our hypothesis. In addition, we report user feedback on how the participants would explain the actions themselves. Finally, we propose preliminary guidelines for the development of explainable BDI agents.
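
The abstract does not specify the three explanation algorithms. As a purely illustrative sketch, assuming a hypothetical trace format in which each executed action is annotated with the goal it serves and the beliefs that enabled it, explanation generators of the kind the paper evaluates might look as follows. The Goal and Action classes and the three explain_* functions are invented for illustration and are not the paper's actual algorithms.

```python
# Illustrative sketch only: the explanation types below (goal-based,
# belief-based, combined) are hypothetical stand-ins for whatever the
# paper's three algorithms actually produce from a BDI agent's trace.
from dataclasses import dataclass, field


@dataclass
class Goal:
    """A node in a BDI agent's goal hierarchy (hypothetical structure)."""
    name: str
    parent: "Goal | None" = None


@dataclass
class Action:
    """An executed action, annotated with the goal it serves and the
    beliefs that enabled it (a hypothetical trace format)."""
    name: str
    goal: Goal
    enabling_beliefs: list[str] = field(default_factory=list)


def explain_by_goal(action: Action) -> str:
    """Goal-based explanation: justify the action by the goal it serves."""
    return f"I {action.name} because I want to {action.goal.name}."


def explain_by_belief(action: Action) -> str:
    """Belief-based explanation: justify the action by its enabling beliefs."""
    beliefs = " and ".join(action.enabling_beliefs) or "it was possible"
    return f"I {action.name} because {beliefs}."


def explain_by_goal_and_belief(action: Action) -> str:
    """Combined explanation: motivating goal plus enabling beliefs."""
    beliefs = " and ".join(action.enabling_beliefs) or "it was possible"
    return (f"I {action.name} because I want to {action.goal.name} "
            f"and because {beliefs}.")


if __name__ == "__main__":
    # Toy trace from a disaster-training style scenario (invented example).
    rescue = Goal("rescue the victim")
    act = Action("opened the door", goal=rescue,
                 enabling_beliefs=["I believed the victim was inside"])
    for explain in (explain_by_goal, explain_by_belief,
                    explain_by_goal_and_belief):
        print(explain(act))
```

In a user study like the one reported (n=30), each generator's output for the same set of actions could then be rated for usefulness and naturalness, which is the kind of comparison the hypothesis requires.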
