Face animation based on observed 3D speech dynamics

Realistic face animation is especially hard because we are all experts in the perception and interpretation of face dynamics. One approach is to simulate facial anatomy. Alternatively, animation can be based on first observing the visible 3D dynamics, extracting the basic modes, and then recombining these modes according to the required performance. This is the strategy followed in this paper, which focuses on speech. The approach follows a kind of bootstrap procedure. First, 3D shape statistics are learned from a talking face carrying a relatively small number of markers. A 3D reconstruction is produced at temporal intervals of 1/25 s. A topological mask of the lower half of the face is fitted to the motion, and principal component analysis (PCA) of the mask shapes reduces the dimensionality of the mask shape space. The result is twofold: on the one hand, the face can be animated, in our case made to speak new sentences; on the other hand, face dynamics can be tracked in 3D without markers for performance capture.
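To make the PCA step concrete, the following is a minimal NumPy sketch of how such a low-dimensional mask shape space could be built and then used in both directions mentioned above: synthesizing new shapes (animation) and projecting observed shapes onto the modes (markerless tracking). The paper does not publish code, so all names here (learn_shape_modes, mask_frames, n_modes) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Sketch only: function and variable names are hypothetical, not from the paper.

def learn_shape_modes(mask_frames: np.ndarray, n_modes: int = 6):
    """mask_frames: (n_frames, 3 * n_vertices) array, one flattened fitted
    mask per 1/25 s frame. Returns the mean shape and the leading modes."""
    mean_shape = mask_frames.mean(axis=0)
    centered = mask_frames - mean_shape
    # The right singular vectors of the centered data are the PCA modes.
    _, _, modes = np.linalg.svd(centered, full_matrices=False)
    return mean_shape, modes[:n_modes]

def synthesize_shape(mean_shape, modes, coeffs):
    """Animation direction: a few mode coefficients give a new mask shape."""
    return mean_shape + coeffs @ modes

def fit_coefficients(mean_shape, modes, observed_shape):
    """Tracking direction: project an observed shape onto the modes.
    Because the modes are orthonormal, projection is a matrix product."""
    return (observed_shape - mean_shape) @ modes.T

# Toy usage: 200 frames of a 500-vertex mask.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 3 * 500))
mean, modes = learn_shape_modes(frames)
shape = synthesize_shape(mean, modes, rng.normal(size=6))
coeffs = fit_coefficients(mean, modes, shape)  # recovers the coefficients
```

The key design point the abstract relies on is that a handful of coefficients suffices to describe the lower face during speech, so both animation and tracking reduce to working in this small coefficient space rather than on raw vertex positions.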
