Facial Expression Recognition Using Model-Based Feature Extraction and Action Parameters Classification

This paper introduces an automatic facial expression recognition system that consists of two parts: facial feature extraction and facial expression recognition. The system applies the point distribution model and the gray-level model to locate the facial features. The position variations of certain designated points on the facial features are then described by 10 action parameters (APs). The recognition process has two phases: a training phase and a recognition phase. In the training phase, given 90 different expressions, the system classifies the principal components of the APs of all training expressions into six clusters. In the recognition phase, given a facial image sequence, it identifies the facial expression by extracting the 10 APs, analyzing their principal components, and finally computing the AP profile correlation for a higher recognition rate. Experiments demonstrate that the system can recognize facial expressions effectively.
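The training/recognition pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the AP data, the six class labels, the number of retained principal components, and the two-stage candidate refinement are all assumptions for the example, with placeholder random values standing in for real AP measurements.

```python
import numpy as np

# Hypothetical training data: 90 expression samples x 10 action parameters (APs),
# each labeled with one of six expression classes (placeholder random values).
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 10))      # AP vectors for the 90 training expressions
y = rng.integers(0, 6, size=90)    # six expression clusters

# Training phase: principal-component analysis of the APs via SVD,
# then one cluster centre per expression class in PC space.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 3                              # number of principal components kept (assumed)

def project(a):
    """Project AP vectors onto the first k principal components."""
    return (a - mean) @ Vt[:k].T

centres = np.array([project(X[y == c]).mean(axis=0) for c in range(6)])
# Mean AP profile per class, used for the correlation refinement step.
templates = np.array([X[y == c].mean(axis=0) for c in range(6)])

def recognise(ap_vector):
    """Recognition phase: nearest cluster centre in PC space, refined by
    AP-profile correlation against the per-class mean profiles."""
    z = project(ap_vector)
    d = np.linalg.norm(centres - z, axis=1)
    candidates = np.argsort(d)[:2]  # two closest clusters (assumed shortlist size)
    corr = [np.corrcoef(ap_vector, templates[c])[0, 1] for c in candidates]
    return int(candidates[int(np.argmax(corr))])
```

Projecting into PC space before clustering reduces the 10-dimensional AP vectors to a few uncorrelated components, and the final correlation step lets the raw AP profile break ties between nearby clusters.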
