A fusion process based on belief theory for classification of facial basic emotions

This paper presents a system for facial expression classification based on a data fusion process using belief theory. The considered expressions correspond to the six universal emotions (joy, surprise, disgust, sadness, anger, fear) as well as to the neutral expression. Since some of the six basic emotions are difficult for non-actors to simulate, the performance of the classification system is evaluated only on four expressions (joy, surprise, disgust, and neutral). The proposed algorithm analyzes characteristic distances measuring the deformations of facial features, computed on expression skeletons. These skeletons result from a contour segmentation of the permanent facial features (mouth, eyes, and eyebrows). The considered distances are used to build an expert system for classification. The performance and limits of the recognition system, and its ability to generalize across databases, are highlighted through the analysis of a large number of results on three databases: the Hammal-Caplier database, the Cohn-Kanade database, and the Cottrell database.
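The fusion step described above rests on belief theory: each facial distance yields a basic belief assignment (BBA) over subsets of the expression classes, and the BBAs are combined. As a minimal sketch of this idea, the snippet below applies the unnormalized conjunctive combination rule of the Transferable Belief Model to two BBAs over the four evaluated expressions. All mass values and the per-feature sources (mouth, eyes) are hypothetical illustrations, not the paper's actual measurements.

```python
from itertools import product

def conjunctive_combination(m1, m2):
    """TBM conjunctive combination of two basic belief assignments.

    BBAs map frozensets of hypotheses to masses. The rule multiplies
    masses of every pair of focal elements and accumulates the product
    on their intersection; conflict stays on the empty set (no
    Dempster normalization)."""
    combined = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        combined[inter] = combined.get(inter, 0.0) + ma * mb
    return combined

# Frame of discernment: the four expressions evaluated in the paper.
JOY, SURPRISE, DISGUST, NEUTRAL = "joy", "surprise", "disgust", "neutral"
OMEGA = frozenset({JOY, SURPRISE, DISGUST, NEUTRAL})

# Hypothetical BBAs derived from two characteristic distances;
# mass on OMEGA models each source's ignorance.
m_mouth = {frozenset({JOY}): 0.6,
           frozenset({JOY, SURPRISE}): 0.3,
           OMEGA: 0.1}
m_eyes = {frozenset({JOY}): 0.5,
          frozenset({NEUTRAL}): 0.2,
          OMEGA: 0.3}

m = conjunctive_combination(m_mouth, m_eyes)
# Mass concentrates on {joy}; the empty-set mass measures the
# conflict between the two sources.
```

A decision can then be taken on the combined BBA, e.g. by picking the singleton with the highest pignistic probability; mass left on the empty set signals conflicting evidence, which belief theory exposes rather than hides.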
