Person-Specific Joy Expression Synthesis with Geometric Method

Smiling has a psychological effect on emotional state and may hold tremendous potential for the clinical remediation of psychiatric disorders. A few researchers in image synthesis have worked on influencing the emotional state of subjects by automatically deforming their faces to synthesize a joyful expression. However, to generate these expressions they apply the same deformation to every subject, even though each person smiles differently. In this paper, we move towards a personalized synthesis of the joy expression. We studied the trajectories of facial landmarks during a smile on the CK, Oulu-CASIA and MMI databases. The results show that the smile is person-specific: landmark trajectories are nearly straight lines, yet they differ from one person to another. We therefore propose a system that photo-realistically transforms a detected face into a personalized joyful one in real time using a geometric method. Both visual fidelity and a statistical study demonstrate that our person-specific method generates joy expressions closer to the ground truth than two non-personalized state-of-the-art approaches.
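To make the idea concrete, below is a minimal sketch (not the authors' exact pipeline) of how a person-specific geometric smile synthesis could be implemented: each facial landmark is displaced along its own straight-line trajectory, scaled by a desired intensity, and the image is then warped with a standard piecewise-affine transform. The per-landmark smile directions (`smile_vectors`) are a hypothetical input, e.g. estimated beforehand from that person's own neutral-to-apex smile sequence.

```python
# Minimal sketch: person-specific smile synthesis via straight-line landmark
# displacement followed by a piecewise-affine image warp (scikit-image).
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def synthesize_joy(image, landmarks, smile_vectors, intensity=1.0):
    """Warp `image` so its landmarks move along person-specific trajectories.

    image         : H x W x 3 float array, neutral face.
    landmarks     : (N, 2) array of (x, y) landmark positions on the neutral face.
    smile_vectors : (N, 2) per-landmark displacement directions learned for this
                    person (hypothetical input, e.g. neutral-to-apex differences).
    intensity     : scalar in [0, 1] controlling expression strength.
    """
    h, w = image.shape[:2]
    # Fixed anchor points on the image border so the background stays in place
    # and the triangulated mesh covers the whole frame.
    anchors = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1],
                        [w // 2, 0], [w // 2, h - 1],
                        [0, h // 2], [w - 1, h // 2]], dtype=float)

    src = np.vstack([landmarks, anchors])
    dst = np.vstack([landmarks + intensity * smile_vectors, anchors])

    # skimage's warp() expects a transform mapping output coords back to input
    # coords, so we estimate the inverse mapping (dst -> src).
    tform = PiecewiseAffineTransform()
    tform.estimate(dst, src)
    return warp(image, tform, output_shape=image.shape)
```

In such a setup, personalization lives entirely in `smile_vectors`: a non-personalized baseline would use one average displacement field for everyone, whereas the person-specific variant derives the straight-line trajectories from the individual's own smile samples.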
