Model for predicting perception of facial action unit activation using virtual humans

Abstract

Blendshape facial rigs are used extensively in industry for the facial animation of virtual humans. However, storing and manipulating large numbers of facial meshes (blendshapes) is costly in terms of memory and computation for gaming applications. Blendshape rigs are composed of sets of semantically meaningful expressions, often based on Action Units from the Facial Action Coding System (FACS), which govern how expressive the character will be. However, the relative perceptual importance of blendshapes has not yet been investigated. Research in psychology and neuroscience has shown that our brains process faces differently from other objects, so we postulate that the perception of facial expressions will be feature-dependent rather than based purely on the amount of movement required to make the expression. We therefore believe that the perceived visibility of a blendshape cannot be reliably predicted by numerical calculations of the difference between the expression and the neutral mesh alone. In this paper, we explore the noticeability of blendshapes under different activation levels, and we present new perceptually based models to predict the perceptual importance of blendshapes. The models predict visibility from commonly used geometry- and image-based metrics.
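As a rough illustration of the kinds of metrics the abstract refers to, the sketch below computes one geometric difference measure (RMS per-vertex displacement between an activated blendshape and the neutral mesh) and one image-based measure (1 − SSIM between greyscale renders of the two expressions, via scikit-image). These particular metric choices, the function names, and the `load_mesh` helper in the usage comment are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch, assuming meshes are (V, 3) NumPy vertex arrays and
# renders are 2D greyscale float arrays. Not the paper's implementation.
import numpy as np
from skimage.metrics import structural_similarity


def rms_vertex_displacement(neutral, expression):
    """Root-mean-square per-vertex displacement between the neutral
    mesh and an activated blendshape (a simple geometry-based metric)."""
    offsets = expression - neutral              # (V, 3) displacement vectors
    return np.sqrt(np.mean(np.sum(offsets ** 2, axis=1)))


def image_dissimilarity(render_neutral, render_expression):
    """1 - SSIM between renders of the two expressions; higher values
    suggest a more visible change (an image-based metric)."""
    data_range = render_expression.max() - render_expression.min()
    score = structural_similarity(render_neutral, render_expression,
                                  data_range=data_range)
    return 1.0 - score


# Hypothetical usage; load_mesh is a stand-in for your own asset loader:
# neutral_verts = load_mesh("neutral")
# expr_verts = load_mesh("AU12_activation_0.5")
# print(rms_vertex_displacement(neutral_verts, expr_verts))
```

Under the paper's hypothesis, a perceptual model would weight such metrics by facial feature rather than treating every region's displacement equally.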
