Generation of Head Motion During Dialogue Speech, and Evaluation in Humanoid Robots

Head motion occurs naturally and in synchrony with speech during human dialogue and may carry paralinguistic information such as intentions, attitudes, and emotions. Natural-looking head motion is therefore important for smooth human–robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, we propose a model for generating nodding and head tilting, and we evaluate it on different types of humanoid robots. Analysis of subjective scores showed that the proposed model, which combines head tilting with nodding, generates head motion perceived as more natural than nodding alone or than directly mapping people's original motions without gaze information. We also found that an upward motion of the face can give robots that lack a mouth the appearance of speaking. Finally, we conducted an experiment in which participants acted as visitors to an information desk attended by robots. The evaluation indicated that, in terms of perceived naturalness, our model is as effective as directly mapping people's original motions together with gaze information.
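To make the rule-based idea concrete, the following is a minimal sketch of how dialogue-act labels might be mapped to head-motion commands. This is not the authors' implementation: the act inventory, the rule table, the function names (`DialogueAct`, `HeadMotion`, `generate_head_motion`), and the assumption that motions are triggered at phrase-final positions are all illustrative.

```python
# Minimal sketch (not the paper's implementation) of a rule-based mapper
# from dialogue-act labels to head-motion commands, in the spirit of
# generating nodding and head tilting from dialogue acts.
# All names and the rule table below are hypothetical.

from enum import Enum


class DialogueAct(Enum):
    AFFIRMATION = "affirmation"
    BACKCHANNEL = "backchannel"
    QUESTION = "question"
    THINKING = "thinking"
    STATEMENT = "statement"


class HeadMotion(Enum):
    NOD = "nod"          # short downward-upward pitch movement
    HEAD_TILT = "tilt"   # sideways roll movement
    FACE_UP = "face_up"  # upward pitch, e.g., to signal that the robot is speaking
    NONE = "none"


# Hypothetical rule table: acts that commonly co-occur with nods
# (agreement, backchannels) trigger NOD; acts associated with
# uncertainty or questioning trigger HEAD_TILT.
RULES = {
    DialogueAct.AFFIRMATION: HeadMotion.NOD,
    DialogueAct.BACKCHANNEL: HeadMotion.NOD,
    DialogueAct.QUESTION: HeadMotion.HEAD_TILT,
    DialogueAct.THINKING: HeadMotion.HEAD_TILT,
    DialogueAct.STATEMENT: HeadMotion.NONE,
}


def generate_head_motion(act: DialogueAct, phrase_final: bool) -> HeadMotion:
    """Return a head-motion command for the current utterance phrase.

    Assumption for this sketch: motions are triggered only at
    phrase-final positions, where head-motion events tend to
    concentrate in dialogue speech.
    """
    if not phrase_final:
        return HeadMotion.NONE
    return RULES.get(act, HeadMotion.NONE)


if __name__ == "__main__":
    # A backchannel at a phrase-final position yields a nod.
    print(generate_head_motion(DialogueAct.BACKCHANNEL, phrase_final=True))
```

In a real system, the motion command would be sent to the robot's neck controller and timed against the speech signal; the sketch only illustrates the act-to-motion mapping step.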
