Making machines understand facial motion and expressions like humans do

Natural interaction between humans and machines inevitably requires computers to understand human emotions, and most emotional information is conveyed through facial motion and expression. This article presents a new image-analysis procedure that understands facial actions in monocular video sequences without imposing restrictions on the speaker or the environment. The proposed technique follows a global-to-specific analysis approach that imitates the way people analyze facial motion: by dividing the analysis into processes at different levels of detail.
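The global-to-specific strategy can be pictured as a pipeline of three stages at increasing levels of detail: rigid head pose first, facial regions next, and fine-grained expression parameters last. The sketch below is a hypothetical illustration of that structure only; the function names, regions, and placeholder values are assumptions, not the authors' implementation.

```python
def estimate_head_pose(frame):
    """Global stage: recover rigid 3-D head pose (yaw, pitch, roll).

    Placeholder: a real system would fit a 3-D head model to the frame.
    """
    return {"yaw": 0.0, "pitch": 0.0, "roll": 0.0}


def locate_facial_regions(frame, pose):
    """Intermediate stage: use the recovered pose to stabilize regions
    of interest (bounding boxes are illustrative placeholders)."""
    return {"eyes": (10, 20, 30, 40), "mouth": (15, 50, 35, 60)}


def analyze_expressions(frame, regions):
    """Specific stage: extract fine-grained action parameters per region."""
    return {name: {"activation": 0.0} for name in regions}


def analyze_frame(frame):
    """Run the levels of detail in order, coarse to fine, so each stage
    can rely on the output of the more global stage before it."""
    pose = estimate_head_pose(frame)
    regions = locate_facial_regions(frame, pose)
    actions = analyze_expressions(frame, regions)
    return {"pose": pose, "regions": regions, "actions": actions}


result = analyze_frame(frame=None)  # `frame` stands in for a video image
print(sorted(result))  # → ['actions', 'pose', 'regions']
```

The key design point is the dependency order: each finer stage consumes the output of the coarser one, so restricting the analysis level by level mirrors how a human observer first registers overall head motion before attending to individual facial features.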
