XSAMPL3D: An Action Description Language for the Animation of Virtual Characters

JVRB, 9(2012), no. 1. - In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as the action representation language in an imitation-based approach to character animation: first, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and the objects involved. The XSAMPL3D action description can then be used to synthesize animations in which virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact, human-readable XML format. XSAMPL3D descriptions are therefore also amenable to manual authoring, e.g. for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. When XSAMPL3D descriptions are derived from VR interactions, however, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes and other hand-object relations during grasping. Such detail would be hard to specify with manual motion authoring techniques alone. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.
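To give a rough idea of the kind of compact, human-readable XML action description the abstract refers to, the sketch below shows what a recorded pick-and-place action might look like. The actual XSAMPL3D schema is not reproduced here; all element and attribute names (actionSequence, pickAndPlace, grasp, trajectory, etc.) are hypothetical placeholders chosen for illustration only, not the published vocabulary of the language.

    <!-- Hypothetical sketch only; element names are illustrative placeholders,
         not the actual XSAMPL3D schema. -->
    <actionSequence actor="virtualHuman1">
      <pickAndPlace object="cup1" hand="right">
        <!-- hand shape and approach recorded from the VR demonstration -->
        <grasp type="powerGrasp" approachDirection="top"/>
        <!-- demonstrated motion trajectory, stored as sparse key frames -->
        <trajectory interpolation="spline">
          <keyFrame time="0.0" position="0.30 0.10 0.95"/>
          <keyFrame time="1.2" position="0.45 0.25 1.10"/>
        </trajectory>
        <placeOn target="tray1"/>
      </pickAndPlace>
    </actionSequence>

A description at this level of abstraction names the action type and the objects involved while still carrying demonstration detail (grasp shape, trajectory), which is what allows the same description to be replayed by virtual humans of different body sizes and proportions.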
