Facial expression recognition in continuous videos using linear discriminant analysis

In this paper, we address the recognition of facial expressions in continuous videos. We introduce a view- and texture-independent approach that exploits facial action parameters estimated by an appearance-based 3D tracker. We represent the learned facial actions associated with different facial expressions as time series, which are then efficiently and compactly encoded in Eigenspace and Fisherspace for subsequent recognition. The proposed approach is fast and can be used online. Experiments demonstrate the effectiveness of the developed method.
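To make the recognition pipeline described above concrete, the following is a minimal sketch (not the authors' implementation) of the general idea: facial-action parameter trajectories are flattened into fixed-length vectors, projected into Eigenspace (PCA) and Fisherspace (LDA), and classified with a nearest-neighbor rule. All shapes, dimensions, and class counts below are illustrative assumptions, and the data is synthetic.

```python
# Sketch only: PCA (Eigenspace) + LDA (Fisherspace) + 1-NN on flattened
# facial-action time series. Data, dimensions, and class labels are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Assumed toy data: 120 video clips, each a 30-frame time series of
# 6 facial action parameters (e.g., lip/brow/jaw actuations from a 3D tracker),
# flattened to one 180-dimensional vector per clip.
n_clips, n_frames, n_actions = 120, 30, 6
X = rng.normal(size=(n_clips, n_frames * n_actions))
y = rng.integers(0, 6, size=n_clips)  # 6 universal expression classes (assumed)

# Eigenspace step (PCA) compacts the time series; Fisherspace step (LDA)
# finds class-discriminative directions; 1-NN performs the final recognition.
model = make_pipeline(
    PCA(n_components=20),
    LinearDiscriminantAnalysis(),
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

In practice, the fixed-length representation would come from the tracked facial action parameters of real sequences (possibly after temporal alignment); the synthetic data here only illustrates the projection-and-classification stage.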
