Belief Theory Applied to Facial Expressions Classification

A novel and efficient approach to facial expression classification based on belief theory and data fusion is presented and discussed. The expressions considered correspond to three of the six universal emotions (joy, surprise, disgust) as well as the neutral expression. A robust contour segmentation technique is used to generate an expression skeleton from the permanent facial features (mouth, eyes and eyebrows). From this skeleton, a set of characteristic distances is defined that quantifies the facial feature deformations occurring when an expression appears on the face. In order to recognize "pure" expressions as well as mixtures of facial expressions, a fusion process based on belief theory is proposed. The performance and limits of the proposed recognition method are assessed through the analysis of a large number of results on three test databases: the Hammal-Caplier database, the Cohn-Kanade database and the Cottrel database. Preliminary results demonstrate the value of the proposed approach, as well as its ability to recognize non-separable facial expressions.
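As an illustration of the kind of fusion involved, the sketch below combines two basic belief assignments (BBAs) with the unnormalized conjunctive rule of belief theory over the frame {joy, surprise, disgust, neutral}. This is a minimal sketch under the assumption that each characteristic distance yields a BBA over subsets of expressions; the mass values and the helper conjunctive_combination are hypothetical and do not reproduce the paper's implementation.

# Minimal sketch (not the paper's implementation): fusing two basic belief
# assignments with the unnormalized conjunctive rule of belief theory.
# Each BBA maps subsets of the frame of discernment to a mass in [0, 1].

OMEGA = frozenset({"joy", "surprise", "disgust", "neutral"})  # frame of discernment

def conjunctive_combination(m1, m2):
    """m(A) = sum over B, C with B & C == A of m1(B) * m2(C).
    Mass left on the empty set quantifies the conflict between the two sources."""
    combined = {}
    for b, mass_b in m1.items():
        for c, mass_c in m2.items():
            a = b & c  # set intersection; empty when the sources conflict
            combined[a] = combined.get(a, 0.0) + mass_b * mass_c
    return combined

# Hypothetical BBAs, as might be derived from two characteristic distances:
# the first source hesitates between joy and disgust, the second favours joy.
m_d1 = {frozenset({"joy", "disgust"}): 0.7, OMEGA: 0.3}
m_d2 = {frozenset({"joy"}): 0.6, frozenset({"surprise"}): 0.2, OMEGA: 0.2}

fused = conjunctive_combination(m_d1, m_d2)
for subset, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    label = ", ".join(sorted(subset)) if subset else "conflict (empty set)"
    print(f"{label}: {mass:.2f}")

In this toy example most of the fused mass concentrates on the singleton "joy", while the residual mass kept on non-singleton subsets is what allows mixtures (doubt between expressions) to be reported rather than forced into a single class.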
