Evaluating Cognitive and Affective Intelligent Agent Explanations in a Long-Term Health-Support Application for Children with Type 1 Diabetes

Explanation of actions is important for the transparency of, and trust in, the decisions of smart systems. The literature suggests that emotions and emotion words, in addition to beliefs and goals, are used in human explanations of behaviour. Furthermore, research in e-health support systems and human-robot interaction stresses the need for studying long-term interaction with users. However, state-of-the-art explainable artificial intelligence for intelligent agents focuses mainly on explaining an agent's behaviour in terms of its underlying beliefs and goals, and evaluates these explanations in short-term experiments. In this paper, we report on a long-term experiment in which we tested the effect of cognitive explanations, affective explanations, and no explanations on children's motivation to use an e-health support system. Children (aged 6–14) with type 1 diabetes mellitus interacted with a virtual robot as part of the e-health system over a period of 2.5–3 months, alternating between the three conditions. The agent behaviours explained to the children were why (1) the agent asks a certain quiz question; (2) the agent provides a specific tip (a short instruction) about diabetes; or (3) the agent suggests a task, e.g., playing a quiz or watching a video about diabetes. Motivation was measured by counting how often children followed the agent's suggestion, how often they continued playing the quiz or asked for an additional tip, and how often they requested an explanation from the system. Surprisingly, children followed task suggestions more often when no explanation was given, while no other explanation effects appeared. To our knowledge, this is the first long-term study to report empirical evidence for an agent explanation effect, challenging future studies to uncover the underlying mechanism.
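To make the behavioural motivation measures concrete, the following is a minimal Python sketch that tallies the three counts per explanation condition from an interaction log. The abstract does not describe the system's logging format, so the Event record, the event-kind names, and the tally_motivation helper are illustrative assumptions, not the study's actual analysis code.

from collections import Counter
from dataclasses import dataclass

# Hypothetical event record; fields are assumptions, since the paper's
# abstract does not specify how interactions were logged.
@dataclass
class Event:
    child_id: str
    condition: str  # "cognitive", "affective", or "none"
    kind: str       # "suggestion_followed", "continued", "explanation_requested"

def tally_motivation(events):
    """Count each motivation measure separately per explanation condition."""
    counts = Counter()
    for e in events:
        counts[(e.condition, e.kind)] += 1
    return counts

# Minimal usage example with made-up data.
log = [
    Event("c01", "none", "suggestion_followed"),
    Event("c01", "cognitive", "explanation_requested"),
    Event("c02", "affective", "continued"),
    Event("c02", "none", "suggestion_followed"),
]
for (condition, kind), n in sorted(tally_motivation(log).items()):
    print(f"{condition:>9} | {kind:<22} | {n}")

Comparing such per-condition counts (e.g., suggestion-follow rates under cognitive, affective, and no explanation) is one plausible way to operationalise the effect the abstract reports; the actual statistical analysis used in the study is not given here.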
