Human interfaces for computer graphics systems are evolving toward a fully multi-modal approach. Information gathered with visual, audio, and motion-capture systems is becoming increasingly important in user-controlled virtual environments. This paper discusses real-time interaction through the visual analysis of human facial features. The underlying approach used to recognize and analyze the facial movements of a real performance is described in detail. The program's output is directly compatible with MPEG-4 standard facial animation parameters, so the extracted data can be reused in any other MPEG-4-compatible application. The real-time facial analysis system lets the user control the graphics system by means of facial expressions. It is used primarily with real-time facial animation systems, where a synthetic actor reproduces the animator's expressions. Because the MPEG-4 standard focuses strongly on networking and its bandwidth requirements are low, the approach also offers interesting possibilities for teleconferencing.
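To make the MPEG-4 compatibility concrete: the standard expresses facial animation parameters (FAPs) not in pixels but in facial animation parameter units (FAPUs), each defined as 1/1024 of a neutral-face distance such as the mouth-nose separation, which is what makes the values transferable between applications and face models. The sketch below is a minimal illustration of that conversion, not code from the paper; the function names and pixel values are assumptions.

```python
# Minimal sketch (assumed names/values): converting a tracked feature
# displacement in pixels into an MPEG-4-style FAP value in FAPUs.

def fapu(neutral_distance_px: float) -> float:
    """An MPEG-4 FAPU is 1/1024 of a neutral-face reference distance."""
    return neutral_distance_px / 1024.0

def to_fap(displacement_px: float, neutral_distance_px: float) -> int:
    """Express a tracked displacement in FAPU units, rounded to an integer."""
    return round(displacement_px / fapu(neutral_distance_px))

# Example: a vertical lip displacement measured against the neutral
# mouth-nose separation (both values are illustrative).
mns0_px = 40.0          # neutral mouth-nose separation, pixels (assumed)
displacement_px = 5.0   # tracked lip movement, pixels (assumed)
print(to_fap(displacement_px, mns0_px))  # 5 / (40/1024) = 128
```

Because the receiver reconstructs motion from its own model's FAPUs, the same FAP stream animates any MPEG-4-compliant face, which is what keeps the teleconferencing bandwidth low.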