A Novel Dataset for Real-Life Evaluation of Facial Expression Recognition Methodologies

A limitation common to most previous methods is that they were evaluated under settings far removed from real-life scenarios. The reason is that existing facial expression recognition (FER) datasets are mostly posed and assume a predefined setup: expressions are recorded with a fixed camera deployment, a constant background, and static ambient conditions. In real life, FER systems must cope with changing ambient conditions, dynamic backgrounds, varying camera angles, different face sizes, and other human-related variations. Accordingly, in this work three FER datasets are collected over a period of six months, keeping in view the limitations of the existing datasets. The datasets are collected from YouTube videos, real-world talk shows, and real-world interviews. The most widely used FER methodologies are implemented and evaluated on these datasets to analyze their performance in real-life situations, as sketched in the example below.
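As a rough illustration of the kind of frame-wise evaluation described above, the sketch below pairs a standard face detector with a local binary pattern (LBP) descriptor and an SVM, one of the widely used FER baselines cited in the literature. This is a minimal sketch, not the authors' implementation: the loader of labelled frames, the crop size, and the train/test split are hypothetical assumptions chosen only to make the example self-contained.

```python
# Minimal sketch of a frame-wise FER evaluation pipeline (not the authors' code).
# The LBP + SVM classifier stands in for one of the "widely used" baselines;
# how frames and labels are loaded from the collected datasets is assumed.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# OpenCV's bundled frontal-face Haar cascade.
FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lbp_histogram(face, radius=1, points=8):
    """Uniform LBP histogram of a grayscale face crop."""
    lbp = local_binary_pattern(face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2,
                           range=(0, points + 2), density=True)
    return hist

def extract_feature(frame):
    """Detect the largest face in a BGR frame; return its LBP histogram or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = FACE_DETECTOR.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    return lbp_histogram(face)

def evaluate(frames, labels):
    """Train/test split on extracted features; report plain accuracy."""
    feats, kept = [], []
    for frame, label in zip(frames, labels):
        f = extract_feature(frame)
        if f is not None:          # frames with no detected face are skipped
            feats.append(f)
            kept.append(label)
    X_tr, X_te, y_tr, y_te = train_test_split(
        np.array(feats), np.array(kept), test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```

In an unconstrained setting, the face-detection step itself becomes a failure mode (missed or partial faces under varying angles and backgrounds), which is exactly the kind of degradation such an evaluation is meant to expose.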
