Modelling Facial Features for Emotion Recognition and Synthesis

ABSTRACT Emotion recognition and synthesis is one of the most important challenges for effective human–computer interaction. In this paper, a novel approach to emotion recognition is presented that models facial feature deformations. The approach builds on the observation that facial features such as the lips, nose, eyes, and eyebrows deform as emotions vary. To measure the change in shape of these features, landmark points are extracted around them, and a thin-plate spline (TPS) is used to model the deformation of these landmarks. A key property of the TPS mapping function is that it computes both rigid and non-rigid transformations between the neutral and emotion image frames: the rigid parameters capture the affine component caused by head movement, while the non-rigid parameters represent the facial feature deformation caused by emotion. To demonstrate the modelling ability of TPS, the non-rigid parameters are fed to a support vector machine for emotion recognition. Moreover, an attempt is made to synthesize emotion using the TPS warping function: the mean non-rigid transformation for an emotion serves as a template to warp a neutral image into an emotion image. The proposed approach is evaluated on the extended Cohn-Kanade (CK+) and JAFFE databases, with experimental results showing 95% and 70% accuracy, respectively.
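The rigid/non-rigid split at the heart of the abstract can be illustrated with a minimal TPS fit in the style of Bookstein's classical formulation: solving one linear system for a pair of landmark sets yields a 3×2 affine block (head movement) and an N×2 block of non-rigid warping coefficients (expression deformation). This is a sketch under stated assumptions, not the paper's implementation; the function name and the small epsilon guarding the log are illustrative.

```python
import numpy as np

def tps_params(src, dst):
    """Fit a 2-D thin-plate spline mapping src landmarks (N x 2) onto dst.

    Returns (affine, nonrigid): the 3 x 2 affine part [translation; linear map]
    and the N x 2 non-rigid coefficients. The non-rigid block is what would be
    fed to a classifier as an expression-deformation feature.
    """
    n = src.shape[0]
    # Radial basis kernel U(r) = r^2 log(r^2), defined as 0 at r = 0.
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = np.where(d2 > 0, d2 * np.log(d2 + 1e-12), 0.0)
    # Affine basis [1, x, y] for each landmark.
    P = np.hstack([np.ones((n, 1)), src])
    # Assemble the standard TPS system  [[K, P], [P^T, 0]] params = [dst; 0].
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    Y = np.zeros((n + 3, 2))
    Y[:n] = dst
    params = np.linalg.solve(L, Y)
    # Last three rows: affine part; first n rows: non-rigid coefficients.
    return params[n:], params[:n]
```

A useful sanity check on the decomposition: if the target landmarks differ from the source by a pure affine transform (head movement only, no expression change), the non-rigid coefficients come out at numerical zero, so only genuine shape deformation contributes to the feature vector.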
