This article deals with facial segmentation and lip tracking with feedback control for real-time animation of a synthetic 3D face model. Classical approaches consist of two successive steps: video analysis followed by synthesis. We instead build a global analysis/synthesis processing loop, in which image analysis relies on 3D synthesis and conversely. To this end, our analysis algorithm fits a generic 3D face model onto the speaker's face so that synthesis information (such as 3D structure or face shape) can be exploited. The approach is inspired by control systems theory with feedback loops. The contribution of the paper is to use simple image processing techniques on the available data while improving the segmentation through the feedback loop. Moreover, we propose robust lip-corner tracking based on a motion estimation algorithm. The speaker is only asked to face the camera with the mouth closed at the beginning of the video session (neutral position), which allows a quick initialisation step to fit the 3D face model. Results show that real-time (30 Hz) and robust performance is achievable under real-world conditions, which are two key issues for face and lip tracking applications.
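The analysis/synthesis feedback loop described above can be sketched in miniature: each frame's raw measurement is corrected toward a prediction projected from the 3D model, and the corrected estimate updates the model for the next frame. This is a hypothetical illustration of the closed-loop idea only; the names (`FaceState`, `analyse`, `synthesise`, the scalar lip-corner coordinate, and the fixed feedback gain) are assumptions for brevity, not the paper's actual algorithm.

```python
# Minimal sketch of a closed analysis/synthesis loop, assuming a 1D
# lip-corner coordinate and a constant feedback gain. All names are
# illustrative, not taken from the paper.

from dataclasses import dataclass


@dataclass
class FaceState:
    """Parameters of the (here, trivial) generic 3D face model."""
    lip_corner_x: float = 0.0


def synthesise(state: FaceState) -> float:
    """Synthesis step: project the model to predict the feature
    position in the next frame (identity projection here)."""
    return state.lip_corner_x


def analyse(measurement: float, prediction: float, gain: float = 0.5) -> float:
    """Analysis step: correct the raw image measurement toward the
    model's prediction -- the feedback from the synthesis step."""
    return prediction + gain * (measurement - prediction)


def track(measurements: list[float]) -> list[float]:
    """Run the closed loop over a stream of per-frame measurements,
    initialising from the first frame (the neutral position)."""
    state = FaceState(lip_corner_x=measurements[0])
    estimates = []
    for m in measurements:
        prediction = synthesise(state)     # synthesis feeds analysis
        estimate = analyse(m, prediction)  # analysis uses the 3D prediction
        state.lip_corner_x = estimate      # update the model parameters
        estimates.append(estimate)
    return estimates
```

With noisy measurements, the loop damps abrupt jumps (e.g. a false lip-corner detection) because each estimate is pulled toward the model's prediction, which is the practical benefit the abstract attributes to the feedback design.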