Recognition of emotions from video using neural network models

In this paper, facial features from video sequences are explored for characterizing emotions. The emotions considered in this study are Anger, Fear, Happy, Sad and Neutral. The video data required for the proposed emotion recognition study was collected at the studio of the Center for Education Technology (CET) at the Indian Institute of Technology (IIT) Kharagpur. The dynamic nature of the grey values of the pixels within the eye and mouth regions is used as the feature to capture emotion-specific knowledge from the facial expressions. Multiscale morphological erosion and dilation operations are used to extract features from the eye and mouth regions, respectively. The features extracted from the left eye, right eye and mouth regions are used to develop separate models for each emotion category. Autoassociative neural network (AANN) models are used to capture the distribution of the extracted features. The developed models are validated through subject-dependent and subject-independent emotion recognition studies. The overall performance of the proposed emotion recognition system is observed to be about 87%.
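The multiscale morphological feature extraction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the number of scales, the structuring-element sizes, and the use of the mean grey value as the per-scale feature are all assumptions made for the example; the paper itself applies erosion to the eye regions and dilation to the mouth region.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def multiscale_morph_features(region, num_scales=5):
    """Apply grey-level erosion and dilation to a facial region at
    increasing structuring-element scales and pool each result into
    a scalar feature (here, the mean grey value -- an assumption)."""
    features = []
    for k in range(1, num_scales + 1):
        size = 2 * k + 1  # structuring element grows with scale k
        eroded = grey_erosion(region, size=(size, size))
        dilated = grey_dilation(region, size=(size, size))
        features.append(eroded.mean())   # erosion-based feature (eyes)
        features.append(dilated.mean())  # dilation-based feature (mouth)
    return np.asarray(features)

# Toy 16x16 "eye region" of grey values standing in for a cropped frame.
rng = np.random.default_rng(0)
eye = rng.integers(0, 256, size=(16, 16)).astype(float)
print(multiscale_morph_features(eye).shape)  # -> (10,)
```

Per-frame feature vectors like this, computed over the video sequence, would then feed the per-emotion AANN models, which score a test vector by how well they reconstruct it.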
