Creating a Photoreal Digital Actor: The Digital Emily Project

The Digital Emily Project is a collaboration between facial animation company Image Metrics and the Graphics Laboratory at the University of Southern California's Institute for Creative Technologies to achieve one of the world's first photorealistic digital facial performances. The project leverages latest-generation techniques in high-resolution face scanning, character rigging, video-based facial animation, and compositing. An actress was first filmed on a studio set speaking emotive lines of dialog in high definition. The lighting on the set was captured as a high dynamic range light probe image. The actress's face was then three-dimensionally scanned in thirty-three facial expressions showing different emotions and mouth and eye movements, using a high-resolution facial scanning process accurate to the level of skin pores and fine wrinkles. Lighting-independent diffuse and specular reflectance maps were also acquired as part of the scanning process. Correspondences between the 3D expression scans were formed using a semi-automatic process, allowing a blendshape facial animation rig to be constructed whose expressions closely mirrored the shapes observed in the rich set of facial scans; animated eyes and teeth were also added to the model. Skin texture detail showing dynamic wrinkling was converted into multiresolution displacement maps, also driven by the blend shapes. A semi-automatic video-based facial animation system was then used to animate the 3D face rig to match the performance seen in the original video, and the resulting animation was tracked onto the facial motion in the studio footage. The final face was illuminated by the captured studio illumination and shaded using the acquired reflectance maps with a skin translucency shading algorithm. Using this process, the project was able to render a synthetic facial performance that was generally accepted as a real face.
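As a rough illustration of how a scan-based blendshape rig of this kind can be evaluated (a minimal sketch only; the function names, array shapes, and placeholder data below are assumptions, not the project's actual code), the animated mesh is the neutral scan plus a weighted sum of per-expression offsets, and the same weights can drive the blending of the per-expression displacement maps that carry the dynamic wrinkle detail:

```python
import numpy as np

def evaluate_blendshape_rig(neutral, expression_scans, weights):
    """Evaluate a linear blendshape rig: the neutral scan plus a weighted
    sum of per-expression vertex offsets (deltas).

    neutral:          (V, 3) neutral-pose vertex positions
    expression_scans: (E, V, 3) expression scans in vertex correspondence
    weights:          (E,) blend weights, typically in [0, 1]
    """
    deltas = expression_scans - neutral[None, :, :]          # (E, V, 3) offsets
    return neutral + np.tensordot(weights, deltas, axes=1)   # (V, 3) mesh

def blend_displacement_maps(neutral_disp, expression_disps, weights):
    """Blend per-expression displacement maps with the same weights that
    drive the geometry, so pore- and wrinkle-level detail tracks the pose."""
    deltas = expression_disps - neutral_disp[None, :, :]         # (E, H, W)
    return neutral_disp + np.tensordot(weights, deltas, axes=1)  # (H, W)

# Illustrative call with placeholder data (not the project's scanned assets):
neutral = np.zeros((4, 3))
scans = np.random.rand(2, 4, 3)   # two expression scans in correspondence
mesh = evaluate_blendshape_rig(neutral, scans, np.array([0.5, 0.25]))
```

In the pipeline described above, such weights would come from the semi-automatic video-based facial animation system rather than being set by hand.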
