Facial Feature Extraction Using an Active Appearance Model on the iPhone

Extracting and understanding human emotion plays an important role in interaction between humans and machine communication systems, and the most expressive way humans display emotion is through facial expressions. In this paper, we propose a novel extraction and recognition method for facial expression and emotion on mobile cameras, and formulate a classification model for facial emotions based on the variance of the estimated landmark points. Sixty-five feature points are located on the input face, and the variances of the point locations are then used to recognize facial emotion with a weighted fuzzy k-NN classifier. Three types of facial emotion are recognized and classified: neutral, happy, and angry. To evaluate the performance of the proposed algorithm, we measure the recognition success rate on iPhone camera views. The experimental results show that the proposed method performs well in recognizing facial emotion and is sufficient to warrant immediate application in mobile environments.
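The classification step described above can be illustrated with a minimal sketch of the fuzzy k-NN rule (Keller et al.). This is not the paper's implementation; the feature vector, fuzzifier `m`, and helper name `fuzzy_knn` are assumptions for illustration. In the paper's setting, the input feature vector would be built from the variances of the 65 AAM landmark locations; here a toy 2-D feature is used.

```python
import numpy as np

def fuzzy_knn(train_X, train_y, x, k=3, m=2.0, n_classes=3):
    """Fuzzy k-NN: each of the k nearest training samples votes for its
    class with a weight proportional to its inverse distance, raised to
    the power 2/(m-1). Returns the class with the largest membership.
    (Hypothetical sketch; parameter choices are illustrative.)"""
    d = np.linalg.norm(train_X - x, axis=1)      # distances to all samples
    idx = np.argsort(d)[:k]                      # indices of k nearest
    # clamp zero distances so an exact match does not divide by zero
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1.0))
    memberships = np.zeros(n_classes)
    for i, wi in zip(idx, w):
        memberships[train_y[i]] += wi
    memberships /= w.sum()                       # normalize to [0, 1]
    return int(np.argmax(memberships))
```

For example, with two well-separated clusters of training features labeled 0 (e.g. neutral) and 1 (e.g. happy), a query near the first cluster is assigned label 0.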
