Recognizing multiple emotions from ambiguous facial expressions on mobile platforms

Extracting and understanding emotion is of great importance to the interaction between humans and machine communication systems, and the most expressive channel for conveying human emotion is the facial expression. This paper proposes a multiple-emotion recognition system for mobile environments that can recognize combinations of up to three different emotions using an active appearance model (AAM), a proposed classification standard, and a k-nearest neighbor (k-NN) classifier. The AAM captures expression variations, which are evaluated against the proposed classification standard as the user's expression changes in real time. The k-NN classifier recognizes the basic emotions (normal, happy, sad, angry, and surprise) as well as more ambiguous emotions obtained by combining the basic emotions in real time, and each recognized component emotion is assigned a strength. Whereas most previous emotion recognition methods return only a single emotion label, the proposed system recognizes a variety of emotions as combinations of the five basic emotions. For ease of interpretation, the recognized result is presented in three ways on the mobile camera screen. Experiments showed an average recognition rate of 85 % and a 40 % performance gain for the optimized emotions. The implemented system can also be regarded as an example of augmented reality, displaying a combination of real face video and virtual animation with the user's avatar.
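
The classification step described above can be illustrated with a brief sketch. The code below is not the authors' implementation: it assumes AAM shape/appearance parameters are already available as feature vectors, and the value of k, the strength threshold, and the helper name knn_multi_emotion are all illustrative; the paper's own classification standard is not reproduced here. It shows how k-NN neighbor votes over labeled training expressions can yield up to three emotion labels, each with a strength.

# Minimal sketch (not the authors' implementation): a k-NN vote over
# AAM-derived feature vectors, reporting up to three emotions with
# strengths proportional to the neighbor votes. Feature extraction,
# the training set, k, and the strength threshold are all assumptions.
from collections import Counter
import numpy as np

EMOTIONS = ["normal", "happy", "sad", "angry", "surprise"]

def knn_multi_emotion(query, train_feats, train_labels, k=7, min_strength=0.15):
    """Return up to three (emotion, strength) pairs for one AAM feature vector.

    query        : 1-D array of AAM shape/appearance parameters (assumed given)
    train_feats  : 2-D array, one row per labeled training expression
    train_labels : list of basic-emotion labels, same length as train_feats
    """
    # Euclidean distance from the query to every training sample.
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.argsort(dists)[:k]

    # Vote counts among the k nearest neighbors become emotion strengths.
    votes = Counter(train_labels[i] for i in nearest)
    strengths = {emotion: count / k for emotion, count in votes.items()}

    # Keep at most three emotions whose strength passes the threshold.
    ranked = sorted(strengths.items(), key=lambda kv: kv[1], reverse=True)
    return [(e, round(s, 2)) for e, s in ranked[:3] if s >= min_strength]

# Example: an ambiguous expression may come back as
# [("happy", 0.57), ("surprise", 0.29)] rather than a single label.

Reporting vote proportions rather than a single majority label is what allows an ambiguous expression to be described as a weighted mixture of basic emotions, in the spirit of the combination scheme the abstract describes.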
