Generating animatable 3D virtual faces from scan data

This paper presents a new adaptation-based approach for reconstructing animatable facial models of individual people from scan data with minimal user intervention. A generic control model that represents both the face shape and the layered biomechanical structure serves as the starting point for our face adaptation algorithm. Once a minimal set of anthropometric landmarks has been specified on the 2D images, the algorithm automatically recovers their 3D positions on the face surface using a projection-mapping approach. Based on a series of measurements between the 3D landmarks, a global adaptation aligns the generic control model to the measured surface data using affine transformations. A local adaptation then deforms the geometry of the generic model to fit all of its vertices to the scanned surface. The reconstructed model accurately represents the shape of the individual face and can synthesize various expressions using transferred muscle actuators. Key features of our method are a near-automated reconstruction process, no restrictions on the position and orientation of the generic model and the scanned surface, and an efficient framework for animating any human dataset.
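The global adaptation step described above, aligning the generic control model to the scanned surface via an affine transformation derived from corresponding 3D landmarks, can be sketched with a standard least-squares fit. This is an illustrative reconstruction under stated assumptions, not the paper's implementation; the function names are hypothetical, and the paper derives its transformation from landmark measurements rather than necessarily from a direct least-squares solve:

```python
import numpy as np

def fit_affine(src_landmarks, dst_landmarks):
    """Estimate the affine map (A, t) that best carries the generic
    model's 3D landmarks onto the scanned surface's landmarks, in the
    least-squares sense. Requires at least four non-coplanar points."""
    n = src_landmarks.shape[0]
    # Homogeneous design matrix: one row [x y z 1] per landmark.
    X = np.hstack([src_landmarks, np.ones((n, 1))])
    # Solve X @ M ~= dst for the 4x3 parameter matrix M.
    M, *_ = np.linalg.lstsq(X, dst_landmarks, rcond=None)
    A = M[:3].T          # 3x3 linear part (rotation/scale/shear)
    t = M[3]             # translation
    return A, t

def globally_adapt(vertices, A, t):
    """Apply the recovered affine alignment to every vertex of the
    generic control model before local, per-vertex fitting."""
    return vertices @ A.T + t
```

Because the transform is estimated from landmark correspondences alone, no prior registration of the generic model and the scanned surface is needed, which is consistent with the method's claim of imposing no restriction on their position and orientation.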
