Presenting in Virtual Worlds: An Architecture for a 3D Anthropomorphic Presenter

Meeting and lecture room technology is a burgeoning field. Such technology can provide real-time support for physically present participants, online support for remote participation, and offline access to recorded meetings or lectures. Providing this kind of support requires capturing the relevant information from meetings or lectures, and presenting that captured information in multimedia form demands careful design. In this article, we focus on the models and associated algorithms that steer a virtual presenter's presentation animations. In our approach, presentations are generated from a script that describes the synchronization of speech, gestures, and movements. The script also has a channel devoted to presentation sheets (slides) and sheet changes, which we assume are an essential part of the presentation. To present and explain information, this 3D humanoid presenter uses output channels such as speech and the animation of posture, pointing, and involuntary movements.
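
To make the script structure concrete, the following sketch (in Python) shows one way a multi-channel presentation script could be represented; the channel names, entry fields, and example content are illustrative assumptions, not taken from the article. Each channel holds timed entries so that speech, gestures, posture shifts, and sheet changes can be synchronized against a common timeline.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Entry:
        start: float      # start time in seconds, relative to the presentation start
        duration: float   # how long the behaviour lasts
        content: str      # text to speak, gesture name, or sheet identifier

    @dataclass
    class PresentationScript:
        # One list of timed entries per output channel; the channel names
        # here (speech, gesture, posture, sheet) are hypothetical.
        speech: List[Entry] = field(default_factory=list)
        gesture: List[Entry] = field(default_factory=list)
        posture: List[Entry] = field(default_factory=list)
        sheet: List[Entry] = field(default_factory=list)

        def events(self):
            """Merge all channels into one time-ordered stream for playback."""
            tagged = [(e.start, name, e)
                      for name, channel in (("speech", self.speech),
                                            ("gesture", self.gesture),
                                            ("posture", self.posture),
                                            ("sheet", self.sheet))
                      for e in channel]
            return sorted(tagged, key=lambda t: t[0])

    # Example: point at a sheet while explaining it.
    script = PresentationScript(
        speech=[Entry(0.0, 4.0, "On this sheet you see the meeting room layout.")],
        gesture=[Entry(0.5, 2.0, "point_at_sheet")],
        sheet=[Entry(0.0, 30.0, "sheet_2")],
    )
    for start, channel, entry in script.events():
        print(f"{start:5.1f}s  {channel:8s}  {entry.content}")

In this sketch, an animation planner would consume the merged event stream and dispatch each entry to the appropriate output channel of the presenter; the actual script format and channel set used in the article may differ.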
