Geometry vs. Appearance for Discriminating between Posed and Spontaneous Emotions

Spontaneous facial expressions differ from posed ones in appearance, timing and accompanying head movements. Still images cannot provide timing or head-movement information directly. Indirectly, however, the distances between facial key points extracted from a still image with active shape models can capture some movement and pose changes, superposed on the non-rigid facial motion that is itself part of the expression. Does geometric information improve the discrimination between spontaneous and posed facial expressions of discrete emotions? We investigate a machine vision system that discriminates between posed and spontaneous versions of the six basic emotions using SIFT appearance-based features and FAP geometric features. Experimental results on the NVIE database show that fusing geometric information yields only a marginal improvement over appearance features alone. With the fused features, surprise is the easiest emotion to distinguish (83.4% accuracy) and disgust the most difficult (76.1%). Our results also show that the facial regions most important for discriminating the posed from the spontaneous version of an emotion differ from those most important for classifying that emotion against other emotions. The distribution of the selected SIFT features indicates that the mouth matters most for sadness and the nose for surprise, whereas both the nose and mouth are important for disgust, fear, and happiness, and the eyebrows, eyes, nose and mouth are all important for anger.
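The abstract does not give implementation details, so the following is a minimal sketch, under stated assumptions, of how SIFT appearance features sampled at facial landmarks might be fused with simple geometric (inter-landmark distance) features and fed to a classifier for posed-versus-spontaneous discrimination. It uses OpenCV's SIFT and a scikit-learn SVM; the landmark input, the distance-based geometric descriptor (a crude stand-in for MPEG-4 FAPs), and the function names are illustrative assumptions, not the authors' pipeline.

```python
# Sketch (assumptions labelled in comments): fuse SIFT appearance features
# computed at facial landmarks with pairwise-distance geometric features,
# then train an SVM to separate posed from spontaneous examples of one emotion.
import cv2                      # OpenCV: SIFT descriptors
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def appearance_features(gray_img, landmarks, patch_size=16.0):
    """SIFT descriptors at fixed facial landmarks (assumed to come from an
    active-shape-model fit). Returns one concatenated feature vector."""
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), patch_size) for (x, y) in landmarks]
    _, descriptors = sift.compute(gray_img, keypoints)   # shape: (n_landmarks, 128)
    return descriptors.reshape(-1)

def geometric_features(landmarks):
    """Pairwise distances between landmarks, used here as a simple proxy for
    FAP-style geometric measurements (an assumption, not the paper's exact FAPs)."""
    return pdist(np.asarray(landmarks, dtype=float))

def fused_features(gray_img, landmarks):
    """Concatenate appearance and geometric descriptors (feature-level fusion)."""
    return np.concatenate([appearance_features(gray_img, landmarks),
                           geometric_features(landmarks)])

def train_classifier(samples):
    """`samples` is a hypothetical list of (gray_image, landmarks, label) tuples,
    with label 1 = spontaneous and 0 = posed, for a single emotion such as surprise."""
    X = np.stack([fused_features(img, lm) for img, lm, _ in samples])
    y = np.array([label for _, _, label in samples])
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(X, y)
    return clf
```

A feature-selection step (e.g. ranking the SIFT dimensions by relevance, as the abstract's discussion of "selected SIFT features" implies) would sit between feature extraction and the SVM; it is omitted here for brevity.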
