A Comparison of Facial Features and Fusion Methods for Emotion Recognition

Emotion recognition is an important part of human behavior analysis, with applications in human-computer interaction, driver safety, health care, stress detection, psychological analysis, forensics, law enforcement, and customer care. This paper applies a pattern recognition framework based on facial expression features and two classifiers, linear discriminant analysis (LDA) and k-nearest neighbor (k-NN), to emotion recognition. The Extended Cohn-Kanade (CK+) database is used to classify five emotions: neutral, anger, disgust, happiness, and surprise. The Discrete Cosine Transform (DCT), the Discrete Sine Transform (DST), the Fast Walsh-Hadamard Transform (FWHT), and a new 7-dimensional feature obtained by condensing the Facial Action Coding System (FACS) are compared. Ensemble systems using decision-level fusion, score-level fusion, and the Borda count are also studied. Fusion of the four features achieves slightly more than 90% accuracy.
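
The sketch below illustrates, in broad strokes, the kind of pipeline the abstract describes: 2-D DCT coefficients extracted from a face crop, LDA and k-NN classifiers trained on those features, and a simple score-level fusion that averages class posteriors. It is not the authors' implementation; the retained coefficient block, the value of k, and the averaging fusion rule are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def dct_features(face, num_coeffs=64):
    # Keep the low-frequency (top-left) block of 2-D DCT coefficients
    # as a compact feature vector; block size is an assumption.
    coeffs = dctn(face.astype(float), norm='ortho')
    side = int(np.sqrt(num_coeffs))
    return coeffs[:side, :side].ravel()

def fused_predict(X_train, y_train, X_test, k=5):
    # X_train / X_test: iterables of grayscale face crops; y_train: emotion labels.
    F_train = np.array([dct_features(f) for f in X_train])
    F_test = np.array([dct_features(f) for f in X_test])

    lda = LinearDiscriminantAnalysis().fit(F_train, y_train)
    knn = KNeighborsClassifier(n_neighbors=k).fit(F_train, y_train)

    # Score-level fusion: average the two classifiers' class posteriors
    # and pick the emotion with the highest fused score.
    scores = (lda.predict_proba(F_test) + knn.predict_proba(F_test)) / 2.0
    return lda.classes_[np.argmax(scores, axis=1)]
```

The same structure extends to the other features (DST, FWHT, FACS-based) by swapping the feature extractor, and to decision-level fusion or the Borda count by combining predicted labels or class rankings instead of posterior scores.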
