Piavca: a framework for heterogeneous interactions with virtual characters

This paper presents a virtual character animation system for real-time multimodal interaction in an immersive virtual reality setting. Human-to-human interaction is highly multimodal, involving features such as verbal language, tone of voice, facial expression, gesture and gaze. This multimodality means that, in order to simulate social interaction, our characters must be able to handle many different types of interaction, and many different types of animation, simultaneously. Our system is based on a model of animation that represents different types of animation as instantiations of an abstract function representation. This makes it easy to combine different types of animation. It also encourages the creation of behavior out of basic building blocks, making it easy to create and configure new behaviors for novel situations. The model has been implemented in Piavca, an open-source character animation system.
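The core idea of the functional model can be sketched as follows: an animation is a function from time to a pose, and combinators build new animations out of existing ones, so heterogeneous behaviors compose uniformly. This is a minimal illustrative sketch only; the names and types below are hypothetical and do not reflect Piavca's actual API.

```python
from typing import Callable, Dict

Pose = Dict[str, float]           # joint name -> rotation angle (toy 1-DOF pose)
Motion = Callable[[float], Pose]  # an animation is a function of time

def constant(pose: Pose) -> Motion:
    """A motion that holds a single pose indefinitely."""
    return lambda t: dict(pose)

def blend(a: Motion, b: Motion, weight: float) -> Motion:
    """Linearly interpolate two motions joint by joint."""
    def motion(t: float) -> Pose:
        pa, pb = a(t), b(t)
        return {j: (1 - weight) * pa[j] + weight * pb[j] for j in pa}
    return motion

def sequence(a: Motion, b: Motion, switch_time: float) -> Motion:
    """Play motion a, then motion b (retimed to start at switch_time)."""
    return lambda t: a(t) if t < switch_time else b(t - switch_time)

# Because every combinator returns a plain Motion, different behavior
# types (gesture, gaze, posture, ...) can be combined the same way:
wave = constant({"arm": 90.0})
rest = constant({"arm": 0.0})
greeting = sequence(blend(rest, wave, 0.5), rest, 2.0)
print(greeting(1.0))  # {'arm': 45.0}
```

In this style, new behaviors are configured by composing combinators rather than writing new animation code, which is the "building blocks" property the abstract describes.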
