Example-based facial rigging

We introduce a method for generating facial blendshape rigs from a set of example poses of a CG character. Our system transfers controller semantics and expression dynamics from a generic template to the target blendshape model while solving for an optimal reproduction of the training poses. This enables a scalable design process in which the user can iteratively add training poses to refine the blendshape expression space, yet plausible animations can be obtained even from a single training pose. We show that formulating the optimization in gradient space yields superior results compared to a direct optimization on blendshape vertices. We provide examples for both hand-crafted characters and 3D scans of a real actor, and demonstrate the performance of our system in the context of markerless, art-directable facial tracking.
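The gradient-space formulation lends itself to a per-triangle linear least-squares solve. The following is a minimal, hypothetical sketch, not the authors' implementation: it assumes the per-triangle deformation gradients of the training poses and of a deformation-transferred template rig are already flattened to 9-vectors, and that the blendshape activation weights of each training pose are known; the fit is regularized toward the template prior.

```python
import numpy as np

def fit_blendshape_gradients(template_grads, pose_grads, weights, gamma=0.1):
    """Regularized least-squares fit of blendshape gradients for one triangle.

    Hypothetical simplification of a gradient-space blendshape fit:
      template_grads : (K, 9) gradients of the K template blendshapes (prior)
      pose_grads     : (P, 9) gradients of the P training poses (targets)
      weights        : (P, K) known blendshape activations of each training pose
      gamma          : strength of the pull toward the template prior
    Returns a (K, 9) array of fitted blendshape gradients for this triangle.
    """
    P, K = weights.shape
    # Stack the data term (pose reproduction) and the prior term (template
    # regularization) into one system:  min_X ||W X - G||^2 + gamma ||X - T||^2.
    A = np.vstack([weights, np.sqrt(gamma) * np.eye(K)])
    b = np.vstack([pose_grads, np.sqrt(gamma) * template_grads])
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X
```

Applied independently to every triangle, the fitted gradient fields would then be integrated back into vertex positions (e.g. via a Poisson-style solve) to obtain the target blendshapes; the full method would additionally alternate such a shape fit with refinement of the activation weights.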
