Biologically vs. Logic Inspired Encoding of Facial Actions and Emotions in Video

Automatic facial expression analysis is an important aspect of human-machine interaction, as the face is a key communicative medium. We use our faces to signal interest, disagreement, intentions, or mood through subtle facial motions and expressions. Work on automatic facial expression analysis can roughly be divided into the recognition of prototypic facial expressions, such as the six basic emotional states, and the recognition of atomic facial muscle actions (action units, AUs). Detecting AUs rather than emotions makes facial expression analysis independent of culture-dependent interpretation, reduces the dimensionality of the problem, and reduces the amount of training data required. Classic psychological studies suggest that humans consciously map AUs onto the basic emotion categories using a finite number of rules. Recent studies, on the other hand, suggest that humans recognize emotions unconsciously, via a process that is perhaps best modeled by artificial neural networks (ANNs). This paper investigates these two claims. We compare detecting emotions directly from features with a two-step approach in which we first detect AUs and then use the AUs as input to either a rule base or an ANN to recognize emotions. The results suggest that the two-step approach is possible with only a small loss of accuracy, and that biologically inspired classification techniques outperform those that approach the problem from a logical perspective. This suggests that biologically inspired classifiers are more suitable for computer-based analysis of facial behavior than logic-inspired methods.
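The logic-inspired half of the two-step approach can be illustrated with a minimal sketch: a small rule base that maps detected AU sets to basic emotions. This is not the paper's implementation; the AU combinations below are illustrative EMFACS-style prototypes, and the overlap scoring is an assumption made for the sketch.

```python
# Illustrative rule base mapping action-unit (AU) combinations to basic
# emotions. The prototypes below are EMFACS-style examples, not the
# paper's actual rules.
RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "anger":     {4, 5, 7, 23},  # brow lowerer + lid/lip tighteners
}

def classify(detected_aus):
    """Return the emotion whose AU prototype best overlaps the detected
    AUs (Jaccard similarity), or 'neutral' if nothing matches."""
    best, best_score = "neutral", 0.0
    for emotion, prototype in RULES.items():
        score = len(detected_aus & prototype) / len(detected_aus | prototype)
        if score > best_score:
            best, best_score = emotion, score
    return best
```

In the biologically inspired variant, this hand-written rule base would be replaced by an ANN trained on labeled AU vectors, which is the comparison the paper reports on.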
