Automated facial expression recognition based on FACS action units

Automated recognition of facial expression is an important addition to computer vision research because of its relevance to the study of psychological phenomena and the development of human-computer interaction (HCI). We developed a computer vision system that automatically recognizes individual action units or action unit combinations in the upper face using hidden Markov models (HMMs). Our approach to facial expression recognition is based on the Facial Action Coding System (FACS), which separates expressions into upper- and lower-face actions. We use three approaches to extract facial expression information: (1) facial feature point tracking; (2) dense flow tracking with principal component analysis (PCA); and (3) high gradient component detection (i.e., furrow detection). The recognition rates for upper-face expressions using feature point tracking, dense flow tracking, and high gradient component detection are 85%, 93%, and 85%, respectively.
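To make the HMM-based recognition step concrete, the sketch below scores an observation sequence (e.g., a per-frame feature vector from feature point tracking) against one HMM per action unit and picks the best-scoring model. This is a minimal illustration of forward-algorithm likelihood scoring, not the paper's implementation: the state counts, Gaussian emission parameters, one-dimensional features, and AU labels are all hypothetical placeholders.

```python
import numpy as np

def log_gauss(x, mu, var):
    """Log density of a diagonal-covariance Gaussian at observation x."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def forward_loglik(obs, pi, A, mus, vars_):
    """Log-likelihood of an observation sequence under one HMM,
    computed with the forward algorithm in log space for stability."""
    n_states = len(pi)
    # (T, S) matrix of per-frame emission log-probabilities
    log_b = np.array([[log_gauss(o, mus[s], vars_[s]) for s in range(n_states)]
                      for o in obs])
    log_alpha = np.log(pi) + log_b[0]
    for t in range(1, len(obs)):
        # log-sum-exp over previous states, then add the new emission term
        m = log_alpha.max()
        log_alpha = np.log(np.exp(log_alpha - m) @ A) + m + log_b[t]
    m = log_alpha.max()
    return m + np.log(np.sum(np.exp(log_alpha - m)))

def classify(obs, models):
    """Return the action-unit label whose HMM best explains the sequence."""
    scores = {au: forward_loglik(obs, *params) for au, params in models.items()}
    return max(scores, key=scores.get)

# Toy setup: two 2-state HMMs whose emission means differ (labels are made up).
pi = np.array([0.9, 0.1])
A = np.array([[0.8, 0.2], [0.2, 0.8]])
vars_ = np.ones((2, 1))
models = {
    "AU1": (pi, A, np.array([[0.0], [1.0]]), vars_),
    "AU4": (pi, A, np.array([[5.0], [6.0]]), vars_),
}
obs = np.array([[0.1], [0.2], [0.9], [1.1]])
print(classify(obs, models))
```

In practice each model's transition matrix and emission parameters would be trained (e.g., by Baum-Welch) on tracked feature trajectories for that action unit or combination; the maximum-likelihood decision rule over the set of trained HMMs is what produces the recognition rates quoted above.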
