Motion Generation during Vocalized Emotional Expressions and Evaluation in Android Robots

Vocalized emotional expressions such as laughter and surprise often occur in natural dialogue and are important factors to consider for achieving smooth robot-mediated communication. A mismatch between the audio and visual modalities can cause miscommunication, especially in android robots, whose appearance is highly humanlike. In this chapter, motion generation methods for laughter and vocalized surprise events are introduced, based on analyses of human behavior during dialogue interactions. The effectiveness of controlling different modalities of the face, head, and upper body (eyebrow raising, eyelid widening/narrowing, lip corner/cheek raising, eye blinking, head motion, and torso motion) and of different motion control levels is evaluated using an android robot. Subjective experiments indicate how each modality contributes to the perceived naturalness (humanlikeness) of the motion and to the perceived degree of emotional expression.
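To make the modality-to-actuator mapping described above concrete, the sketch below illustrates how a detected laughter or surprise event might be translated into commands for the listed degrees of freedom. This is a minimal illustration under stated assumptions, not the chapter's actual method: the actuator names, gain values, and the normalized [0, 1] command interface are all hypothetical placeholders introduced here.

```python
from dataclasses import dataclass

# Hypothetical actuator command for one degree of freedom.
# The normalized [0, 1] interface is an assumption for illustration.
@dataclass
class ActuatorCommand:
    name: str
    value: float  # 0.0 = neutral posture, 1.0 = maximum displacement

def clamp(x: float) -> float:
    """Keep an intensity estimate within the normalized range."""
    return max(0.0, min(1.0, x))

def laughter_motion(intensity: float) -> list[ActuatorCommand]:
    """Map a laughter intensity estimate (e.g., derived from the audio
    power of the laughter vocalization) to face/head/torso commands.
    The gains below are illustrative placeholders, not measured values."""
    i = clamp(intensity)
    return [
        ActuatorCommand("lip_corner_raise", 0.9 * i),  # smiling component
        ActuatorCommand("cheek_raise", 0.8 * i),       # accompanies lip corner raising
        ActuatorCommand("eyelid_narrow", 0.5 * i),     # eyes narrow in stronger laughs
        ActuatorCommand("head_pitch_back", 0.3 * i),   # head tilts back
        ActuatorCommand("torso_rock", 0.4 * i),        # rhythmic upper-body movement
    ]

def surprise_motion(intensity: float) -> list[ActuatorCommand]:
    """Map a vocalized-surprise intensity estimate to commands; surprise
    recruits the opposite eye-region actions from laughter."""
    i = clamp(intensity)
    return [
        ActuatorCommand("eyebrow_raise", 0.9 * i),
        ActuatorCommand("eyelid_widen", 0.7 * i),
        ActuatorCommand("head_pitch_back", 0.2 * i),
    ]

if __name__ == "__main__":
    # Example: a fairly strong laugh detected in the audio stream.
    for cmd in laughter_motion(0.8):
        print(f"{cmd.name}: {cmd.value:.2f}")
```

Separating event detection (which produces an intensity estimate) from the intensity-to-actuator mapping keeps the per-modality contributions explicit, which matches the evaluation setup described above, where individual modalities are enabled or disabled to measure their effect on perceived naturalness.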
