Expressive avatars in MPEG-4

Man-machine interaction (MMI) systems that utilize multimodal information about a user's current emotional state are presently at the forefront of interest in the computer vision and artificial intelligence communities. A lifelike avatar can enhance interactive applications. In this paper, we present the implementation of GretaEngine and the synthesis of facial expressions, including intermediate ones, based on the MPEG-4 standard and Whissell's emotion representation.
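
To make the idea concrete, the sketch below shows one plausible way intermediate expressions could be produced: each archetypal emotion is stored as a profile of MPEG-4 Facial Animation Parameter (FAP) displacements together with its coordinates in Whissell's activation-evaluation space, and an intermediate expression is obtained by inverse-distance interpolation of the profiles around a target point in that space. The `EmotionProfile` class, the specific FAP numbers, the coordinate values, and the blending scheme are all illustrative assumptions, not the paper's actual GretaEngine implementation.

```python
# Minimal sketch of intermediate-expression synthesis (illustrative only).
from dataclasses import dataclass

@dataclass
class EmotionProfile:
    name: str
    activation: float        # Whissell activation coordinate (illustrative scale)
    evaluation: float        # Whissell evaluation coordinate (illustrative scale)
    faps: dict[int, float]   # FAP number -> displacement (FAPU-normalized)

def blend(profiles: list[EmotionProfile],
          target_activation: float,
          target_evaluation: float) -> dict[int, float]:
    """Inverse-distance weighting of FAP profiles around a target point
    in the activation-evaluation plane."""
    weights = []
    for p in profiles:
        d = ((p.activation - target_activation) ** 2 +
             (p.evaluation - target_evaluation) ** 2) ** 0.5
        weights.append(1.0 / (d + 1e-6))  # epsilon avoids division by zero
    total = sum(weights)
    blended: dict[int, float] = {}
    for p, w in zip(profiles, weights):
        for fap, value in p.faps.items():
            blended[fap] = blended.get(fap, 0.0) + (w / total) * value
    return blended

# Hypothetical FAP values; real archetypal profiles would come from
# measured FAP ranges for each expression.
anger = EmotionProfile("anger", activation=0.8, evaluation=-0.6,
                       faps={31: -60.0, 32: -60.0, 4: -40.0})
sadness = EmotionProfile("sadness", activation=-0.5, evaluation=-0.7,
                         faps={31: 40.0, 32: 40.0, 4: -20.0})

# An intermediate expression lying between anger and sadness:
intermediate = blend([anger, sadness],
                     target_activation=0.2, target_evaluation=-0.65)
```

Interpolating in the activation-evaluation plane rather than directly over FAP values keeps the blend tied to a perceptual emotion model: expressions that are close in Whissell's space contribute more to the result, which is one natural reading of how intermediate expressions relate to the archetypal ones.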

[1] P. Gallaher. Individual differences in nonverbal behavior: dimensions of style. 1992.

[2] Maurizio Mancini et al. Formational parameters and adaptive prototype instantiation for MPEG-4 compliant gesture synthesis. Proceedings of Computer Animation 2002 (CA 2002), 2002.

[3] Françoise J. Prêteux et al. Advanced animation framework for virtual character within the MPEG-4 standard. Proceedings of the International Conference on Image Processing, 2002.

[4] Rudolf von Laban. Effort: economy in body movement. 1974.

[5] Catherine Pelachaud et al. Influences and embodied conversational agents. Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004), 2004.

[6] P. Ekman. Facial expression and emotion. The American Psychologist, 1993.

[7] K. Scherer et al. Cues and channels in emotion recognition. 1986.

[8] Kostas Karpouzis et al. Parameterized facial expression synthesis based on MPEG-4. EURASIP J. Adv. Signal Process., 2002.

[9] Cynthia Whissell. The dictionary of affect in language. 1989.