Belief-based Agent Explanations to Encourage Behaviour Change

Explainable virtual agents provide insight into the agent's decision-making process, with the aim of improving the user's acceptance of the agent's actions or recommendations. However, explainable agents commonly rely on their own knowledge and goals when providing explanations, rather than the beliefs, plans, or goals of the user. Little is known about how users perceive explanations tailored to them and about the impact of such explanations on behaviour change. In this paper, we explore the role of belief-based explanations by proposing a user-aware explainable agent that embeds a user model and an explanation engine within a cognitive agent architecture to provide tailored explanations. To draw a clear conclusion about the role of explanation in behaviour change intentions, we investigated whether the level of behaviour change intention is due to building agent-user rapport through the use of empathic language or to trusting the agent's understanding through the explanations it provides. Hence, we designed two versions of a virtual advisor agent, one empathic and one neutral, to reduce study stress among university students, and measured students' rapport levels and intentions to change their behaviour. Our results showed that, with the help of explanation, the agent could build a trusted relationship with the user regardless of the level of rapport. The results further showed that nearly all the recommendations provided by the agent significantly increased users' intentions to change their behaviour.
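To make the idea of a belief-based explanation concrete, the sketch below shows one way an explanation engine could tailor a recommendation's justification to a user model rather than to the agent's own knowledge. This is a minimal illustrative sketch, not the paper's implementation: all class names, fields, and the fallback logic (UserModel, Recommendation, ExplanationEngine) are assumptions introduced here for clarity.

```python
# Illustrative sketch of a belief-based explanation engine for a
# BDI-style advisor agent. All names here are hypothetical and do
# not reflect the paper's actual architecture.

from dataclasses import dataclass, field


@dataclass
class UserModel:
    """Beliefs and goals the agent attributes to the user."""
    beliefs: dict = field(default_factory=dict)   # e.g. {"exams_are_close": True}
    goals: list = field(default_factory=list)     # e.g. ["reduce_study_stress"]


@dataclass
class Recommendation:
    action: str            # the advice given to the user
    agent_belief: str      # the agent's own justification
    user_belief_key: str   # the user belief the advice addresses
    supports_goal: str     # the user goal the advice serves


class ExplanationEngine:
    """Builds explanations from the *user's* beliefs and goals,
    falling back to the agent's own justification when the user
    model lacks the relevant belief or goal."""

    def explain(self, rec: Recommendation, user: UserModel) -> str:
        if rec.user_belief_key in user.beliefs and rec.supports_goal in user.goals:
            # Tailored, belief-based explanation grounded in the user model.
            return (f"Because you believe '{rec.user_belief_key}' and you want "
                    f"to '{rec.supports_goal}', I recommend you {rec.action}.")
        # Agent-centred fallback: the style most explainable agents default to.
        return f"I recommend you {rec.action} because {rec.agent_belief}."


if __name__ == "__main__":
    user = UserModel(beliefs={"exams_are_close": True},
                     goals=["reduce_study_stress"])
    rec = Recommendation(
        action="take a short break every hour",
        agent_belief="regular breaks are known to lower stress",
        user_belief_key="exams_are_close",
        supports_goal="reduce_study_stress",
    )
    print(ExplanationEngine().explain(rec, user))
```

The design point the sketch captures is the paper's central contrast: the same recommendation can be justified from the agent's knowledge or from the user's beliefs and goals, and only the latter yields a user-tailored explanation.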
