Audio-Driven Laughter Behavior Controller

Laughter is a well-documented communicative and expressive signal in face-to-face conversation. Our work builds a laughter behavior controller for a virtual character that generates upper-body animations from laughter audio given as input. The controller relies on the tight correlations between laughter audio and body behavior. A unified continuous-state statistical framework, inspired by the Kalman filter, learns the correlations between laughter audio and head/torso behavior from a recorded dataset of human laughter. Because the recorded dataset lacks shoulder behavior data, a rule-based method models the correlation between laughter audio and shoulder behavior. In the synthesis step, these learned correlations drive the animation of a virtual character. To validate the controller, we conducted a subjective evaluation in which participants viewed videos of a laughing virtual character, comparing animations produced by our controller with those of a state-of-the-art method. The results show that laughter animations computed with our controller are perceived as more natural, as expressing amusement more freely, and as more authentic than those produced by the state-of-the-art method.
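To make the approach concrete, the sketch below illustrates one way a Kalman-filter-style linear dynamical system could map per-frame laughter audio features to head/torso motion, with a simple rule for shoulder amplitude. This is a minimal illustration, not the authors' implementation: the feature choices (pitch, energy), dimensions, and all function names are assumptions, and the paper's actual framework learns richer correlations than this plain least-squares fit.

```python
# Minimal sketch (assumed, not the paper's method) of a linear dynamical
# system x_t = A x_{t-1} + B u_t driven by laughter audio features.
import numpy as np

def fit_linear_dynamics(motion, audio):
    """Least-squares fit of transition A and input matrix B.

    motion: (T, dx) head/torso pose parameters per frame (training data)
    audio:  (T, du) audio features per frame, e.g. pitch and RMS energy
    """
    X_prev, U, X_next = motion[:-1], audio[1:], motion[1:]
    Z = np.hstack([X_prev, U])                      # regressors, (T-1, dx+du)
    W, *_ = np.linalg.lstsq(Z, X_next, rcond=None)  # stacked [A; B]^T
    dx = motion.shape[1]
    return W[:dx].T, W[dx:].T                       # A (dx, dx), B (dx, du)

def synthesize_motion(audio, A, B, x0):
    """Roll the learned dynamics forward, driven by new laughter audio."""
    x, frames = x0, []
    for u in audio:
        x = A @ x + B @ u          # mean prediction for this frame
        frames.append(x)
    return np.array(frames)

def shoulder_amplitude(energy, gain=0.5):
    """Hypothetical rule-based mapping: shoulder shake scales with energy."""
    return gain * energy

# Hypothetical usage: 3 pose dims (head pitch/yaw, torso lean), 2 audio dims.
rng = np.random.default_rng(0)
train_motion = rng.standard_normal((500, 3)).cumsum(axis=0) * 0.01
train_audio = rng.standard_normal((500, 2))
A, B = fit_linear_dynamics(train_motion, train_audio)
new_audio = rng.standard_normal((100, 2))
pose = synthesize_motion(new_audio, A, B, x0=train_motion[-1])
```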
