Conversational virtual character for the Web

Talking virtual characters are graphical simulations of real or imaginary persons capable of human-like behaviour, most importantly talking and gesturing. Coupled with artificial intelligence (AI) techniques, virtual characters are expected to represent the ultimate abstraction of a human-computer interface: one in which the computer looks, talks and acts like a human. Such an interface would combine audio/video analysis and synthesis techniques with AI, dialogue management and a vast knowledge base in order to respond quasi-intelligently to users through speech, gesture and even mood. While this goal still lies in the future, we present an architecture that reaches towards it while aiming at practical applications in the nearer term. Our architecture is aimed specifically at the Web: it involves a talking virtual character capable of engaging in a fairly meaningful conversation with a user who types textual input.
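
As a minimal sketch of how such a Web-based pipeline might be wired together, the TypeScript fragment below routes typed user input to a server-side dialogue engine and plays back the reply as synchronized speech and facial animation on the client. The endpoint, interface names and frame rate are hypothetical illustrations for the data flow described above, not the paper's actual API.

```typescript
// A minimal sketch of the conversational pipeline described above, assuming a
// browser client and a server-side dialogue engine. All names, endpoints and
// parameters here are hypothetical illustrations, not the paper's actual API.

// Reply from the (hypothetical) dialogue server: the answer text, a URL for
// the synthesized speech, and timed facial-animation parameters for lip sync.
interface DialogueReply {
  text: string;                 // chatbot's textual answer
  audioUrl: string;             // text-to-speech audio for the answer
  animationFrames: number[][];  // per-frame facial animation parameters
}

// Send the user's typed input to the dialogue engine and await its reply.
async function converse(userInput: string): Promise<DialogueReply> {
  const response = await fetch("/dialogue", {  // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: userInput }),
  });
  return (await response.json()) as DialogueReply;
}

// Play the reply on the client: start the speech audio and step the virtual
// character's face through the animation frames roughly in sync with it.
async function playReply(
  reply: DialogueReply,
  applyFrame: (frame: number[]) => void,  // renderer hook that deforms the face
): Promise<void> {
  const audio = new Audio(reply.audioUrl);
  await audio.play();  // resolves once playback has started
  const frameMs = 40;  // 25 fps, a common facial-animation frame rate
  for (const frame of reply.animationFrames) {
    applyFrame(frame);
    await new Promise((resolve) => setTimeout(resolve, frameMs));
  }
}
```

The sketch fixes only the data flow: in a real deployment the animation frames would be facial animation parameters driving a face model, and the dialogue engine could be any pattern-matching or statistical chatbot behind the same interface.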
