NarRob: A Humanoid Social Storyteller with Emotional Expression Capabilities

In this paper we propose a model of a robotic storyteller, focusing on its ability to select the gestures most appropriate to accompany a story while also expressing the emotions related to the sentence being told. The robot is endowed with a repository of stories and a set of gestures, inspired by those typically used by humans, which the robot learns by observation. The gestures are annotated by N subjects according to their meaning and typology, and the robot exploits them, according to the story content, to provide an engaging rendition of the tale.
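The gesture-selection step described above could be sketched roughly as follows. This is a minimal illustration, not the authors' actual implementation: it assumes a repository of gestures annotated with meaning and typology labels (as the abstract describes) and a placeholder sentence-level emotion detector; all names (Gesture, detect_emotion, select_gesture) and the scoring scheme are hypothetical.

```python
# Hypothetical sketch of sentence-driven gesture selection.
# Assumption: gestures carry annotator-assigned meaning labels,
# a typology, and a dominant emotion label.

from dataclasses import dataclass


@dataclass
class Gesture:
    name: str            # identifier of the gesture learned by observation
    typology: str        # e.g. "iconic", "metaphoric", "deictic", "beat"
    meanings: set[str]   # meaning labels assigned by the N annotators
    emotion: str         # dominant emotion label from the annotation


def detect_emotion(sentence: str) -> str:
    """Placeholder emotion detector (a lexicon lookup stands in for a real classifier)."""
    lexicon = {"happy": "joy", "afraid": "fear", "sad": "sadness"}
    for word, emotion in lexicon.items():
        if word in sentence.lower():
            return emotion
    return "neutral"


def select_gesture(sentence: str, repository: list[Gesture]) -> Gesture | None:
    """Pick the gesture whose annotated meanings and emotion best match the sentence."""
    emotion = detect_emotion(sentence)
    tokens = set(sentence.lower().split())
    best, best_score = None, 0
    for gesture in repository:
        score = len(gesture.meanings & tokens)  # overlap with annotated meanings
        if gesture.emotion == emotion:
            score += 1                          # bonus for matching the sentence emotion
        if score > best_score:
            best, best_score = gesture, score
    return best
```

In use, the storyteller would call select_gesture once per sentence and play the returned gesture alongside the synthesized speech; a real system would replace the lexicon lookup with a trained affect classifier and a richer similarity measure between sentence content and gesture annotations.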
