Perception of linear and nonlinear motion properties using a FACS-validated 3D facial model

In this paper we present the first Facial Action Coding System (FACS)-validated model based on dynamic 3D scans of human faces, intended for use in graphics and psychological research. The model is parameterized by FACS Action Units (AUs) and has been independently validated by FACS experts. Using this model, we explore the perceptual differences between linear facial motions -- represented by a linear blend shape approach -- and real facial motions synthesized through the 3D facial model. Through numerical measures and visualizations, we show that this latter type of motion is geometrically nonlinear in terms of its vertex trajectories. In experiments, we explore the perceptual benefits of nonlinear motion for different AUs. Our results are informative for designers of animation systems in both the entertainment industry and scientific research. They reveal a significant overall benefit to using captured nonlinear geometric vertex motion over linear blend shape motion. However, our findings suggest that not all motions need to be animated nonlinearly: the advantage may depend on the type of facial action being produced and the phase of the movement.
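To make the contrast concrete, the following is a minimal sketch (with hypothetical vertex data, not the paper's model) of why blend shape animation is geometrically linear: as the activation weight goes from 0 to 1, every vertex travels along a straight line between its neutral and peak positions, whereas captured performance data generally deviates from that straight-line path.

```python
# Minimal illustration of linear blend-shape vertex motion.
# The vertex positions here are hypothetical, chosen only to show the geometry.

def blend(neutral, peak, w):
    """Linearly interpolate a 3D vertex: v(w) = (1 - w) * neutral + w * peak."""
    return tuple((1.0 - w) * n + w * p for n, p in zip(neutral, peak))

neutral = (0.0, 0.0, 0.0)   # hypothetical vertex position at rest
peak    = (1.0, 2.0, 0.5)   # same vertex at full AU activation

# Sampling the trajectory shows every intermediate position lies on the
# straight segment between neutral and peak -- the motion is linear.
path = [blend(neutral, peak, w / 4) for w in range(5)]
print(path)
```

A nonlinear (captured) motion would instead trace a curved path through intermediate shapes that are not convex combinations of the two endpoint shapes, which is the geometric property the paper measures and visualizes.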
