Contextual Factors and Adaptive Multimodal Human-Computer Interaction: Multi-level Specification of Emotion and Expressivity in Embodied Conversational Agents

In this paper we present an Embodied Conversational Agent (ECA) model able to display rich verbal and non-verbal behaviors. The selection of these behaviors should depend not only on factors related to her individuality, such as her culture, her social and professional role, and her personality, but also on a set of contextual variables (such as her interlocutor and the social setting of the conversation) and on dynamic variables (beliefs, goals, emotions). We describe the representation scheme and the computational model of behavior expressivity of the Expressive Agent System that we have developed. We then explain how the multi-level annotation of a corpus of emotionally rich TV video interviews can provide context-dependent knowledge as input for the specification of the ECA (e.g., which contextual cues and levels of representation are required for the proper recognition of emotions).
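To make the multi-level specification concrete, here is a minimal sketch of how an agent's individuality, contextual variables, and dynamic variables could jointly modulate gesture expressivity dimensions (spatial extent, power, fluidity) in the spirit of the Expressive Agent System. This is not the authors' implementation: all class names, fields, and weighting rules below are hypothetical illustrations.

```python
# Hypothetical sketch: mapping multi-level factors onto expressivity
# parameters. Names and rules are illustrative, not the paper's model.
from dataclasses import dataclass

@dataclass
class Individuality:
    culture: str          # e.g. "Italian"
    role: str             # social/professional role, e.g. "teacher"
    extraversion: float   # personality trait in [0, 1]

@dataclass
class Context:
    interlocutor: str     # e.g. "stranger", "friend"
    setting: str          # social conversation setting, e.g. "formal"

@dataclass
class DynamicState:
    emotion: str          # e.g. "joy", "anger"
    intensity: float      # in [0, 1]

@dataclass
class Expressivity:
    # Expressivity dimensions of the kind used for gesture synthesis:
    # how wide, how forceful, and how smooth the movement is.
    spatial_extent: float
    power: float
    fluidity: float

def select_expressivity(who: Individuality, ctx: Context,
                        state: DynamicState) -> Expressivity:
    """Combine individual, contextual, and dynamic factors (toy rules)."""
    base = 0.5 + 0.3 * who.extraversion                 # individuality
    damping = 0.5 if ctx.setting == "formal" else 1.0   # context
    drive = state.intensity                             # dynamic variables
    return Expressivity(
        spatial_extent=min(1.0, base * damping * (0.5 + drive)),
        power=min(1.0, drive * damping),
        fluidity=1.0 - 0.5 * drive if state.emotion == "anger" else 0.8,
    )

if __name__ == "__main__":
    agent = Individuality(culture="Italian", role="teacher", extraversion=0.8)
    ctx = Context(interlocutor="student", setting="formal")
    state = DynamicState(emotion="joy", intensity=0.7)
    print(select_expressivity(agent, ctx, state))
```

The design point the sketch illustrates is that no single level determines the surface behavior: the same emotion yields different expressivity depending on who the agent is and whom she is talking to.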
