Multimodal Complex Emotions: Gesture Expressivity and Blended Facial Expressions

One of the challenges of designing virtual humans is the definition of appropriate models of the relation between realistic emotions and the coordination of behaviors in several modalities. In this paper, we present the annotation, representation and modeling of multimodal visual behaviors occurring during complex emotions. We illustrate our work using a corpus of TV interviews. This corpus has been annotated at several levels of information: communicative acts, emotion labels, and multimodal signs. We have defined a copy-synthesis approach to drive an Embodied Conversational Agent from these different levels of information. The second part of our paper focuses on a model of complex emotions (superposition and masking of basic emotions) in the facial expressions of the agent. We explain how the complementary aspects of our work on the corpus and the computational model are used to specify complex emotional behaviors.
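The superposition of two emotions in a facial expression can be pictured as assigning different face regions to different source emotions. The following is a minimal, purely illustrative sketch of that idea; the parameter names, the upper/lower face split, and the `superpose` function are assumptions for illustration, not the model actually proposed in the paper.

```python
# Hypothetical sketch: blend two facial expressions into a "superposition"
# complex emotion. Each expression is a dict of facial-animation-parameter
# intensities (FAP-like, in [0, 1]); region sets decide which source
# emotion drives which area of the face. All names are illustrative.

UPPER_FACE = {"raise_brow", "squeeze_brow", "close_eyelid"}
LOWER_FACE = {"stretch_lip_corner", "raise_lip_corner", "open_jaw"}

def superpose(expr_a, expr_b, upper_from="a"):
    """Combine two expressions by giving the upper face to one emotion
    and the lower face to the other (a common blending heuristic)."""
    blended = {}
    for fap in UPPER_FACE | LOWER_FACE:
        use_a = (fap in UPPER_FACE) == (upper_from == "a")
        source = expr_a if use_a else expr_b
        blended[fap] = source.get(fap, 0.0)  # absent parameters default to 0
    return blended

# Example: sadness in the eyes/brows, joy around the mouth.
sadness = {"raise_brow": 0.6, "close_eyelid": 0.3, "open_jaw": 0.1}
joy = {"stretch_lip_corner": 0.8, "raise_lip_corner": 0.7, "raise_brow": 0.2}

mixed = superpose(sadness, joy, upper_from="a")
```

A masking expression could be sketched the same way, with the felt emotion supplying only the regions the display rules fail to suppress.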
