Investigating Emotion Style in Human Faces and Avatars

This paper describes a computational study of how real humans manifest facial expressions of emotion and how other people perceive those expressions when they are transferred to virtual humans. We propose a new metric for measuring an individual's emotion style, computed from recordings of subjects expressing the six basic emotions (happiness, fear, disgust, anger, surprise, and sadness). Using this metric, we grouped the subjects into four clusters and provide evidence of a visual correlation between the groups and the video footage. After applying the styles to virtual humans, we administered a survey to lay people in order to understand how emotion style is perceived and identified by the general public. The survey not only indicated that people are indeed able to perceive an individual's emotion style regardless of facial geometry, but also suggested that one particular style is considered more sympathetic and approachable than the others.
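
The abstract does not define the metric itself, but the grouping step it describes can be illustrated with a minimal sketch: assuming each subject is summarized by a fixed-length style vector derived from their six recorded emotions, the subjects can be partitioned into four groups. The feature layout, dimensions, and the use of k-means below are illustrative assumptions, not the paper's stated method.

```python
# Hypothetical sketch of the clustering step, not the paper's actual method.
# Assumption: each subject is summarized by one style vector built from
# per-emotion features (e.g., landmark-displacement statistics) for the
# six basic emotions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder data: 20 subjects, 6 emotions, 8 style features per emotion,
# flattened into a single style vector per subject.
n_subjects, n_emotions, n_features = 20, 6, 8
style_vectors = rng.normal(size=(n_subjects, n_emotions * n_features))

# Partition subjects into four style groups, matching the number of
# clusters reported in the abstract.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(style_vectors)

for cluster_id in range(4):
    members = np.flatnonzero(labels == cluster_id)
    print(f"style cluster {cluster_id}: subjects {members.tolist()}")
```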
