An empirical rig for jaw animation

In computer graphics the motion of the jaw is commonly modeled by up-down and left-right rotation around a fixed pivot plus a forward-backward translation, yielding a three-dimensional rig that is well suited to intuitive artistic control. The anatomical motion of the jaw is, however, much more complex, since the joints that connect the jaw to the skull exhibit both rotational and translational components. In reality the jaw does not move in a three-dimensional subspace but on a constrained manifold in six dimensions. We analyze this manifold in the context of computer animation and show how it can be parameterized with three degrees of freedom, yielding a novel jaw rig that preserves intuitive control while providing more accurate jaw positioning. The chosen parameterization furthermore places anatomically correct limits on the motion, preventing the rig from entering physiologically infeasible poses. Our new jaw rig is empirically designed from accurate capture data, and we provide a simple method to retarget the rig to new characters, both human and fantasy.
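For readers unfamiliar with the conventional setup that the paper improves upon, below is a minimal sketch of a pivot-based three-degree-of-freedom jaw rig: pitch and yaw rotation about a fixed pivot plus a forward-backward translation. The function name, pivot location, units, and axis conventions are illustrative assumptions only and are not taken from the paper, which replaces this fixed-pivot model with a data-driven parameterization of the six-dimensional jaw manifold.

```python
import numpy as np

def conventional_jaw_transform(open_deg, side_deg, protrusion_mm,
                               pivot=np.array([0.0, -90.0, -40.0])):
    """Rigid transform of a conventional 3-DoF jaw rig (illustrative only):
    up-down (pitch) and left-right (yaw) rotation about a fixed pivot,
    plus a forward-backward translation along the local z axis.
    Pivot position and units are placeholder values, not from the paper."""
    pitch = np.radians(open_deg)   # mouth opening
    yaw = np.radians(side_deg)     # lateral excursion
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    R = Ry @ Rx
    T = np.eye(4)
    T[:3, :3] = R
    # rotate about the pivot, then slide the mandible forward/backward
    T[:3, 3] = pivot - R @ pivot + np.array([0.0, 0.0, protrusion_mm])
    return T

# example: 20 degrees of opening, 2 degrees lateral, 1 mm protrusion
jaw_to_skull = conventional_jaw_transform(20.0, 2.0, 1.0)
```

Because every pose of this rig is determined by a single fixed pivot, it cannot reproduce the coupled rotation and gliding translation of the temporomandibular joints, which is exactly the limitation the manifold-based parameterization in the abstract addresses.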
