Psychophysical investigation of facial expressions using computer animated faces

The human face can produce a large variety of facial expressions that convey important information for communication. Previous studies using unmanipulated video sequences have shown that movements of single regions such as the mouth, eyes, and eyebrows, as well as rigid head motion, play a decisive role in the recognition of conversational facial expressions. Here, flexible yet realistic computer-animated faces were used to systematically investigate the spatiotemporal interaction of facial movements. In three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, single regions (mouth, eyes, and eyebrows) of a computer-animated face performing seven basic facial expressions were selected. These single regions, as well as combinations of them, were animated for each of the seven chosen expressions, and participants were asked to recognize the animated expressions. The findings show that the animated avatar is, in general, a useful tool for investigating facial expressions, although improvements are needed to achieve higher recognition accuracy for certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge, the perceptual quality of computer animations can be improved to reach a higher level of realism and effectiveness.