Combination of facial movements on a 3D talking head

Facial movements play an important role in interpreting spoken conversations and emotions. There are several types of movements, such as conversational signals and emotion displays; we call these channels of facial movement. Realistic animation of these movements improves the realism and liveliness of interactions between humans and computers that use embodied conversational agents. To date, no appropriate methods have been proposed for integrating all facial movements. In this paper we propose a scheme for combining facial movements on a 3D talking head. First, we concatenate the movements within the same channel to generate smooth transitions between adjacent movements; this concatenation operates on individual muscles. The movements from all channels are then combined, resolving possible conflicts between muscles.
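
Below is a minimal sketch of the two-step combination idea described above, assuming per-muscle contraction values in [0, 1]. The channel names, the linear blending of adjacent movements, and the priority-weighted conflict-resolution rule are illustrative assumptions, not the paper's actual formulation.

```python
from typing import Dict

Muscle = str  # e.g. "zygomatic_major_left" (hypothetical muscle identifier)


def blend_transition(prev: Dict[Muscle, float],
                     nxt: Dict[Muscle, float],
                     t: float) -> Dict[Muscle, float]:
    """Linearly interpolate per-muscle contractions between two adjacent
    movements of the same channel (t in [0, 1] over the transition window)."""
    muscles = set(prev) | set(nxt)
    return {m: (1.0 - t) * prev.get(m, 0.0) + t * nxt.get(m, 0.0)
            for m in muscles}


def combine_channels(channels: Dict[str, Dict[Muscle, float]],
                     priority: Dict[str, float]) -> Dict[Muscle, float]:
    """Merge per-muscle contractions from all channels into one frame.
    Where several channels drive the same muscle, the conflict is resolved
    here by a priority-weighted average, clamped to [0, 1] (an assumed rule)."""
    combined: Dict[Muscle, float] = {}
    for muscle in {m for ch in channels.values() for m in ch}:
        contributions = [(priority.get(name, 1.0), ch[muscle])
                         for name, ch in channels.items() if muscle in ch]
        total_weight = sum(w for w, _ in contributions)
        value = sum(w * v for w, v in contributions) / total_weight
        combined[muscle] = min(1.0, max(0.0, value))
    return combined


if __name__ == "__main__":
    # Hypothetical example: visual speech and an emotion display both
    # drive the same lip muscle and must be reconciled.
    speech = {"orbicularis_oris": 0.7, "risorius": 0.2}
    emotion = {"orbicularis_oris": 0.3, "zygomatic_major": 0.8}
    frame = combine_channels({"speech": speech, "emotion": emotion},
                             priority={"speech": 2.0, "emotion": 1.0})
    print(frame)
```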
