Multimodal analysis of speech and arm motion for prosody-driven synthesis of beat gestures

We propose a speech-driven beat gesture synthesis and animation framework. We use hidden semi-Markov models for the joint analysis of speech and arm gestures. We use a modified Viterbi algorithm with a duration model to synthesize gestures. We use a unit selection algorithm to generate animations from the synthesis results. The synthesized animations are rated as natural in subjective tests.

We propose a framework for joint analysis of speech prosody and arm motion towards automatic synthesis and realistic animation of beat gestures from speech prosody and rhythm. In the analysis stage, we first segment motion capture data and speech audio into gesture phrases and prosodic units via temporal clustering, and assign a class label to each resulting gesture phrase and prosodic unit. We then train a discrete hidden semi-Markov model (HSMM) over the segmented data, where gesture labels are hidden states with duration statistics and frame-level prosody labels are observations. The HSMM structure allows us to effectively map sequences of shorter-duration prosodic units to longer-duration gesture phrases. In the analysis stage, we also construct a gesture pool consisting of gesture phrases segmented from the available dataset, where each gesture phrase is associated with a class label and a speech rhythm representation. In the synthesis stage, we use a modified Viterbi algorithm with a duration model, which decodes the optimal gesture label sequence with duration information over the HSMM, given a sequence of prosody labels. In the animation stage, the synthesized gesture label sequence with duration and speech rhythm information is mapped into a motion sequence by a multi-objective unit selection algorithm. Our framework is tested using two multimodal datasets in speaker-dependent and speaker-independent settings. The resulting motion sequence, when accompanied by the speech input, yields natural-looking and plausible animations. We use objective evaluations to set the parameters of the proposed prosody-driven gesture animation system, and subjective evaluations to assess the quality of the resulting animations. The subjective evaluations show that the difference between the proposed HSMM-based synthesis and animations driven directly by motion capture is not statistically significant. Furthermore, the proposed HSMM-based synthesis is rated significantly better than a baseline synthesis which animates random gestures based only on joint angle continuity.
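To make the synthesis stage concrete, the sketch below shows a minimal, illustrative duration-explicit Viterbi decoder for a discrete HSMM, with gesture labels as hidden states and frame-level prosody labels as observations. It is not the paper's modified algorithm; the function name `hsmm_viterbi`, the array layouts, and all parameter names are assumptions made for illustration only.

```python
import numpy as np

def hsmm_viterbi(obs, log_pi, log_A, log_B, log_D):
    """Duration-explicit Viterbi decoding for a discrete HSMM (illustrative sketch).

    obs    : (T,) int array of frame-level prosody labels
    log_pi : (K,)      log initial gesture-state probabilities
    log_A  : (K, K)    log gesture-state transition probabilities
    log_B  : (K, M)    log emission probabilities of prosody labels given a gesture state
    log_D  : (K, Dmax) log duration probabilities, log_D[g, d-1] = log P(dur = d | g)

    Returns the decoded sequence of (gesture label, duration) segments.
    """
    T = len(obs)
    K, Dmax = log_D.shape

    # Cumulative emission log-likelihoods: cum[g, t] = sum of log_B[g, obs[s]] for s < t.
    cum = np.concatenate(
        [np.zeros((K, 1)), np.cumsum(log_B[:, obs], axis=1)], axis=1
    )

    delta = np.full((T, K), -np.inf)        # best log-score of a segment ending at frame t in state g
    back = np.zeros((T, K, 2), dtype=int)   # backpointers: (previous state, segment duration)

    for t in range(T):
        for g in range(K):
            for d in range(1, min(Dmax, t + 1) + 1):
                seg_ll = cum[g, t + 1] - cum[g, t + 1 - d]   # emission score of the d-frame segment
                if d == t + 1:
                    # Segment starts at the first frame: use the initial distribution.
                    score = log_pi[g] + log_D[g, d - 1] + seg_ll
                    prev = -1
                else:
                    scores = delta[t - d, :] + log_A[:, g]
                    prev = int(np.argmax(scores))
                    score = scores[prev] + log_D[g, d - 1] + seg_ll
                if score > delta[t, g]:
                    delta[t, g] = score
                    back[t, g] = (prev, d)

    # Backtrack to recover the (gesture label, duration) segmentation.
    segments = []
    t, g = T - 1, int(np.argmax(delta[T - 1]))
    while t >= 0:
        prev, d = back[t, g]
        segments.append((g, d))
        t, g = t - d, prev
    return segments[::-1]
```

In a pipeline like the one described above, the decoded gesture labels and durations would then be handed, together with speech rhythm features, to a unit selection step that picks concrete gesture phrases from the gesture pool and enforces joint angle continuity between consecutive phrases.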
