Control of Feature‐point‐driven Facial Animation Using a Hypothetical Face

A new approach to the generation of feature‐point‐driven facial animation is presented. In the proposed approach, a hypothetical face is used to control the animation of a face model. The hypothetical face is constructed by connecting predefined facial feature points into a net, so that each facet of the net is represented by a Coons patch. The face model is deformed by changing the shape of the hypothetical face, which is done by moving the feature points and adjusting their tangents. Experimental results show that this hypothetical‐face‐based method can generate facial expressions that are visually almost identical to those of a real face.
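The paper does not include source code, but the core geometric primitive it names, a Coons patch, has a standard construction: each facet of the net is a surface blended from its four boundary curves. A minimal sketch of a bilinearly blended Coons patch (function names and the NumPy-based interface are illustrative assumptions, not the authors' implementation) might look like:

```python
import numpy as np

def coons_patch(c_bottom, c_top, c_left, c_right, u, v):
    """Evaluate a bilinearly blended Coons patch at (u, v) in [0, 1]^2.

    Each boundary curve is a callable t -> 3D point (NumPy array), with the
    usual corner-compatibility conditions, e.g. c_bottom(0) == c_left(0),
    c_bottom(1) == c_right(0), c_top(0) == c_left(1), c_top(1) == c_right(1).
    """
    # Ruled surface interpolating the bottom and top boundary curves
    lofted_v = (1.0 - v) * c_bottom(u) + v * c_top(u)
    # Ruled surface interpolating the left and right boundary curves
    lofted_u = (1.0 - u) * c_left(v) + u * c_right(v)
    # Bilinear interpolation of the four corners (counted twice above)
    corners = ((1.0 - u) * (1.0 - v) * c_bottom(0.0)
               + u * (1.0 - v) * c_bottom(1.0)
               + (1.0 - u) * v * c_top(0.0)
               + u * v * c_top(1.0))
    # Boolean sum: the two ruled surfaces minus the doubly counted corners
    return lofted_v + lofted_u - corners
```

In the scheme the abstract describes, moving a feature point (a net vertex) or adjusting its tangents changes the boundary curves shared by the adjacent facets, so the patches deform smoothly and consistently across the net.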
