Towards multimodal expression of information reliability in HRI

In this paper, we discuss preliminary studies on expressive presentation in the context of human-robot interaction. The focus is on how the robot conveys its attitude towards the information it presents to a human partner through non-verbal means, in this case facial expressions. The goal of the research is to better understand how the natural social behaviour and emotional stance of a speaker can manifest themselves in practical information-providing settings. We present a small prototype Furhat robot application in which the robot interacts with human partners, provides information that it judges to be reliable or unreliable, and conveys its attitude through facial expressions. The assumption is that users are more likely to consider the information presented by the robot reliable and trustworthy if it is accompanied by positive, supportive facial expressions (e.g. a smile), and consequently to accept and adopt it as part of their own knowledge. Conversely, if the robot accompanies its presentation with a frowning or disgusted facial expression, the user is likely to associate the content with negative connotations and to judge the information as less reliable and trustworthy.
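
As a rough illustration of how such an attitude display could be scripted, the sketch below shows a single interaction state written against the Furhat Kotlin skill API (furhat.say, furhat.gesture and the built-in Gestures library); this is a minimal sketch, not the prototype itself, and the Fact class, the sample utterances, the binary reliability judgments and the choice of Gestures.Smile versus Gestures.BrowFrown are all illustrative assumptions.

```kotlin
import furhatos.flow.kotlin.*
import furhatos.gestures.Gestures

// Hypothetical container pairing a piece of information with the robot's
// own judgment of whether that information is reliable.
data class Fact(val text: String, val judgedReliable: Boolean)

// Example state: before voicing each fact, the robot selects a facial
// expression that matches its confidence in the information.
val PresentFacts: State = state {
    onEntry {
        val facts = listOf(
            Fact("The museum opens at nine in the morning.", judgedReliable = true),
            Fact("The exhibition may already have closed.", judgedReliable = false)
        )
        for (fact in facts) {
            if (fact.judgedReliable) {
                // Positive, supporting expression for information the robot trusts.
                furhat.gesture(Gestures.Smile)
            } else {
                // Frown to signal doubt about the upcoming content.
                furhat.gesture(Gestures.BrowFrown)
            }
            furhat.say(fact.text)
        }
    }
}
```

The binary switch mirrors the two conditions in the assumption above (supportive versus negative expression); a graded reliability score could instead select from a wider repertoire of expressions or modulate the intensity and timing of the gesture relative to the utterance.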
