Emotion recognition in the wild

For inferring the affective state of a person from data captured in real-world conditions, methods that can perform emotion analysis ‘in the wild’ are required. Here, the term ‘in the wild’ signifies varying environments/scenes, background noise, illumination conditions, head poses and occlusion. Automatic emotion recognition has made significant progress in the last two decades. However, the frameworks developed so far have largely been applied to data collected in controlled laboratory settings, with frontal faces, ideal illumination and posed expressions. In contrast, images and videos on the Web are captured in diverse, unconstrained environments, which poses a major challenge to automatic facial emotion recognition methods. This special issue addresses the problem of emotion recognition in such challenging conditions and is based on the recent series of Emotion Recognition in the Wild (EmotiW) challenges. The first EmotiW challenge [3] brought together researchers working on emotion recognition, with the Acted Facial Expressions in the Wild (AFEW) [4] database forming the baseline for the challenge. AFEW was created from movies using a subtitle-parsing-based approach: short video clips whose subtitles contained words related to emotions were selected as candidate samples.
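To make the subtitle-parsing idea concrete, the sketch below shows one plausible implementation, not the actual AFEW pipeline: it scans SRT-format subtitles for emotion-related keywords and returns the time spans of matching clips. The keyword list, regular expression and function names (EMOTION_KEYWORDS, candidate_clips) are illustrative assumptions.

```python
import re

# Hypothetical emotion keyword list; the lexicon actually used for AFEW
# is larger and was curated by the database authors.
EMOTION_KEYWORDS = {"happy", "smile", "laugh", "angry", "furious",
                    "sad", "cry", "afraid", "scared", "surprised", "disgusted"}

# Matches one SRT block: index, "HH:MM:SS,mmm --> HH:MM:SS,mmm", text lines.
SRT_BLOCK = re.compile(
    r"(\d+)\s*\n"
    r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\s*\n"
    r"(.+?)(?:\n\n|\Z)",
    re.DOTALL,
)

def candidate_clips(srt_text: str):
    """Yield (start, end, text) spans whose subtitle mentions an emotion word."""
    for _idx, start, end, text in SRT_BLOCK.findall(srt_text):
        words = set(re.findall(r"[a-z']+", text.lower()))
        if words & EMOTION_KEYWORDS:
            yield start, end, " ".join(text.split())

if __name__ == "__main__":
    sample = (
        "1\n00:01:02,000 --> 00:01:04,500\nWhy are you so angry with me?\n\n"
        "2\n00:01:05,000 --> 00:01:06,800\nPass the salt, please.\n"
    )
    for clip in candidate_clips(sample):
        print(clip)  # only the first, emotion-related subtitle is flagged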

[1] Shiguang Shan et al., Learning Mid-level Words on Riemannian Manifold for Action Recognition, 2015, arXiv.

[2] Tamás D. Gedeon et al., Emotion Recognition In The Wild Challenge 2014: Baseline, Data and Protocol, 2014, ICMI.

[3] Tamás D. Gedeon et al., Emotion recognition in the wild challenge (EmotiW) challenge and workshop summary, 2013, ICMI.

[4] Ying Chen et al., Combining Multimodal Features with Hierarchical Classifier Fusion for Emotion Recognition in the Wild, 2014, ICMI.

[5] Tamás D. Gedeon et al., Video and Image based Emotion Recognition Challenges in the Wild: EmotiW 2015, 2015, ICMI.

[6] Albert Ali Salah et al., Combining modality-specific extreme learning machines for emotion recognition in the wild, 2014, Journal on Multimodal User Interfaces.

[7] Sascha Meudt et al., Revisiting the EmotiW challenge: how wild is it really?, 2015, Journal on Multimodal User Interfaces.

[8] Christopher Joseph Pal et al., EmoNets: Multimodal deep learning approaches for emotion recognition in video, 2015, Journal on Multimodal User Interfaces.

[9] Takeo Kanade et al., The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression, 2010, CVPR Workshops.

[10] Razvan Pascanu et al., Combining modality specific deep neural networks for emotion recognition in video, 2013, ICMI.

[11] Aleix M. Martínez et al., A Model of the Perception of Facial Expressions of Emotion by Humans: Research Overview and Perspectives, 2012, Journal of Machine Learning Research.

[12] Shiguang Shan et al., Video modeling and learning on Riemannian manifold for emotion recognition in the wild, 2015, Journal on Multimodal User Interfaces.

[13] Tamás D. Gedeon et al., Collecting Large, Richly Annotated Facial-Expression Databases from Movies, 2012, IEEE MultiMedia.

[14] Tamás D. Gedeon et al., Static facial expression analysis in tough conditions: Data, evaluation protocol and benchmark, 2011, ICCV Workshops.

[15] Soo-Young Lee et al., Hierarchical Committee of Deep CNNs with Exponentially-Weighted Decision Fusion for Static Facial Expression Recognition, 2015, ICMI.

[16] Tong Zhang et al., Emotion recognition in the wild via sparse transductive transfer linear discriminant analysis, 2015, Journal on Multimodal User Interfaces.

[17] Ying Chen et al., Combining feature-level and decision-level fusion in a hierarchical classifier for emotion recognition in the wild, 2015, Journal on Multimodal User Interfaces.

[18] Shiguang Shan et al., Combining Multiple Kernel Methods on Riemannian Manifold for Emotion Recognition in the Wild, 2014, ICMI.