Face to virtual face

The first virtual humans appeared in the early 1980s in films such as Dreamflight (1982) and The Juggler (1982). Pioneering work in the ensuing period focused on realistic appearance in the simulation of virtual humans; in the 1990s, the emphasis shifted to real-time animation and interaction in virtual worlds. Virtual humans have begun to inhabit virtual worlds, and so have we. To prepare our place in the virtual world, we first develop techniques for automatically representing a human face that can be animated in real time from both video and audio input. The objective is for one's representative to look, talk, and behave like oneself in the virtual world. Furthermore, the virtual inhabitants of this world should be able to see our avatars and to react to what we say and to the emotions we convey. We sketch an overview of the problems related to the analysis and synthesis of face-to-virtual-face communication in a virtual world, and we describe the components of our system for real-time interaction and communication between a cloned face representing a real person and an autonomous virtual face. The discussion provides insight into the various problems and presents the particular solutions adopted in reconstructing a virtual clone capable of reproducing the shape and movements of the real person's face. These include the analysis of the cloned face's facial expression and speech, which can be used to elicit a response from the autonomous virtual human, with both verbal and nonverbal facial movements synchronized with the audio voice.
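The interaction architecture described above (analyze the clone's expression and speech, then synthesize a verbal and nonverbal response from the autonomous virtual human) can be sketched as a minimal loop. This is an illustrative assumption only: all function names, the `FacialState` structure, and the expression labels are hypothetical, not the authors' actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one cycle of clone-to-agent interaction.
# The real system analyzes live video and audio; here both inputs
# are stand-in dictionaries of pre-extracted features.

@dataclass
class FacialState:
    expression: str                       # recognized from video, e.g. "smile"
    phonemes: list = field(default_factory=list)  # recognized from audio

def analyze_clone(video_frame, audio_frame):
    """Extract the real user's expression and speech (stub analysis)."""
    expression = "smile" if video_frame.get("mouth_corners_up") else "neutral"
    return FacialState(expression, audio_frame.get("phonemes", []))

def autonomous_response(state):
    """Choose the autonomous virtual human's verbal and nonverbal reply."""
    nonverbal = "smile" if state.expression == "smile" else "attentive"
    verbal = ["greeting"] if state.phonemes else []  # reply only if spoken to
    return FacialState(nonverbal, verbal)

def interaction_step(video_frame, audio_frame):
    """One cycle: analyze the cloned face, then synthesize the agent's reaction."""
    return autonomous_response(analyze_clone(video_frame, audio_frame))

reply = interaction_step({"mouth_corners_up": True}, {"phonemes": ["h", "i"]})
print(reply.expression)  # the agent mirrors the user's smile
```

In the actual system this loop would run once per video frame, with the verbal reply driving lip movements synchronized to the synthesized voice.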
