REAL-TIME ANIMATION OF INTERACTIVE AGENTS: SPECIFICATION AND REALIZATION

Embodied agents are a powerful paradigm for current and future multimodal interfaces, yet they require great effort and expertise to create, assemble, and animate. Open animation engines and high-level control languages are therefore needed to make embodied agents accessible to researchers and developers. We present EMBR, a new real-time character animation engine that offers fine-grained animation control via the EMBRScript language. We argue that a new layer of control, the animation layer, is necessary to keep the higher-level control layers (behavioral/functional) consistent and slim while providing unified, abstract access to the animation engine (e.g., for the procedural animation of nonverbal behavior). We also introduce new concepts for the high-level control of motion quality (spatial/temporal extent, power, fluidity). Finally, we describe the architecture of the EMBR engine and its integration into larger project contexts, and conclude with a concrete application.
