The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression

In 2000, the Cohn-Kanade (CK) database was released for the purpose of promoting research into automatically detecting individual facial expressions. Since then, the CK database has become one of the most widely used test-beds for algorithm development and evaluation. During this period, three limitations have become apparent: 1) While AU codes are well validated, emotion labels are not, as they refer to what was requested rather than what was actually performed; 2) there is no common performance metric against which to evaluate new algorithms; and 3) standard protocols for common databases have not emerged. As a consequence, the CK database has been used for both AU and emotion detection (even though labels for the latter have not been validated), comparison with benchmark algorithms is missing, and use of random subsets of the original database makes meta-analyses difficult. To address these and other concerns, we present the Extended Cohn-Kanade (CK+) database. The number of sequences is increased by 22% and the number of subjects by 27%. The target expression for each sequence is fully FACS coded, and emotion labels have been revised and validated. In addition, non-posed sequences for several types of smiles and their associated metadata have been added. We present baseline results using Active Appearance Models (AAMs) and a linear support vector machine (SVM) classifier with leave-one-subject-out cross-validation for both AU and emotion detection on the posed data. The emotion and AU labels, along with the extended image data and tracked landmarks, will be made available in July 2010.
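The baseline protocol described above can be sketched in code. The following is a minimal, hypothetical illustration of leave-one-subject-out cross-validation with a linear SVM, using randomly generated features as stand-ins for the AAM-derived features and labels in the paper; the feature count, subject count, and class count are placeholder assumptions, not values from the dataset.

```python
# Hypothetical sketch of the baseline evaluation protocol:
# leave-one-subject-out cross-validation with a linear SVM.
# Synthetic data stands in for AAM features and emotion labels.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

n_subjects = 10          # placeholder; CK+ has far more subjects
n_seq_per_subject = 5    # placeholder number of sequences per subject
n_features = 20          # stand-in for AAM shape/appearance parameters
n_emotions = 7           # e.g. seven emotion categories

X = rng.normal(size=(n_subjects * n_seq_per_subject, n_features))
y = rng.integers(0, n_emotions, size=n_subjects * n_seq_per_subject)
groups = np.repeat(np.arange(n_subjects), n_seq_per_subject)  # subject IDs

# Each fold holds out every sequence from exactly one subject,
# so no subject appears in both the training and test sets.
logo = LeaveOneGroupOut()
accuracies = []
for train_idx, test_idx in logo.split(X, y, groups):
    clf = LinearSVC()  # linear SVM, as in the baseline
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"folds: {len(accuracies)}, mean accuracy: {np.mean(accuracies):.2f}")
```

Grouping by subject rather than by sequence is the key design choice: it prevents a classifier from exploiting subject identity, which inflates accuracy when sequences from the same person appear on both sides of a split.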
