Facial Expression Recognition Using FAPs-Based 3DMMM

A 3D modular morphable model (3DMMM) is introduced for facial expression recognition. A conventional 3D Morphable Model (3DMM) captures the 3D shape and 2D texture of faces using Principal Component Analysis (PCA). In this work, a modular PCA approach is used instead: the face is divided into six modules corresponding to facial features categorized by the Facial Animation Parameters (FAPs), and each region is treated separately in the PCA analysis. The goal is to recognize the six basic facial expressions, provided the properties of a facial expression are satisfied. Given a 2D image of a subject displaying an expression, a matching 3D model is found by fitting the image to the 3DMMM. Fitting proceeds module by module, in order of each module's importance for facial expression recognition (FER), and each module is assigned a weighting factor based on its position in this priority list. The modules are then combined, and the facial expression is recognized by measuring the similarity (mean square error) between the input image and the reconstructed 3D face model.
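The per-module PCA and priority-weighted error combination described above can be sketched as follows. This is a minimal 2D toy illustration, not the paper's 3D fitting pipeline: the module names, weights, and patch dimensions are illustrative assumptions, and each "module" is treated as a flattened image patch.

```python
import numpy as np

# Hypothetical module list and priority-based weights (assumptions for
# illustration; the paper derives weights from module importance in FER).
MODULES = ["brows", "eyes", "nose", "mouth", "cheeks", "jaw"]
WEIGHTS = np.array([0.25, 0.20, 0.05, 0.30, 0.10, 0.10])  # sums to 1.0

def fit_module_pca(train_patches, n_components):
    """Build a PCA basis for one facial module.

    train_patches: array of shape (n_samples, n_pixels), one flattened
    patch per row. Returns the mean patch and the top principal axes.
    """
    mean = train_patches.mean(axis=0)
    centered = train_patches - mean
    # SVD of the centered data yields the principal directions in vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def reconstruct(patch, mean, basis):
    """Project a patch onto the module subspace and reconstruct it."""
    coeffs = basis @ (patch - mean)
    return mean + basis.T @ coeffs

def weighted_mse(input_patches, models):
    """Combine per-module reconstruction errors with priority weights.

    A lower score means the input is better explained by the model;
    recognition picks the expression model with the smallest score.
    """
    errors = []
    for patch, (mean, basis) in zip(input_patches, models):
        rec = reconstruct(patch, mean, basis)
        errors.append(np.mean((patch - rec) ** 2))
    return float(WEIGHTS @ np.array(errors))
```

In a full system one such set of module models would be fitted per expression class, and the input image assigned to the class whose reconstruction yields the smallest weighted error.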
