Interactive Motion Modeling and Parameterization by Direct Demonstration

While interactive virtual humans are becoming widely used in education, training, and therapeutic applications, building animations that are both realistic and parameterized with respect to a given scenario remains a complex and time-consuming task. To improve this situation, we propose a framework based on the direct demonstration and parameterization of motions. The presented approach addresses three important aspects of the problem in an integrated fashion: (1) our framework relies on an interactive real-time motion capture interface that enables non-skilled animators to model realistic upper-body actions and gestures by direct demonstration; (2) our interface also supports the interactive definition of clustered example motions, so that the variations of interest for a given motion being modeled are well represented; and (3) we present an inverse blending optimization technique that precisely parameterizes a cluster of example motions with respect to arbitrary spatial constraints. The optimization is solved efficiently online, allowing autonomous virtual humans to precisely perform learned actions and gestures toward arbitrarily given targets. Our proposed framework has been implemented in an immersive multi-tile stereo visualization system, achieving a powerful and intuitive interface for programming generic parameterized motions by demonstration.
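The abstract does not detail the inverse blending optimization, but its core idea can be sketched: given a cluster of example motions, find blending weights whose interpolated result satisfies a given spatial constraint (e.g., an end-effector reaching a target). The sketch below is a simplified illustration under assumed conditions, not the paper's actual method: each example motion is reduced to the 3D end-effector position it reaches, the blend is treated as a linear combination of those positions, and the weights are constrained to be non-negative and sum to one. All data and function names here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example cluster: the end-effector position reached by each
# of four pre-recorded example motions (e.g., pointing gestures).
examples = np.array([
    [0.0, 0.0, 0.5],
    [1.0, 0.0, 0.5],
    [0.0, 1.0, 0.5],
    [1.0, 1.0, 0.5],
])

def inverse_blend(target, examples):
    """Solve for blend weights (non-negative, summing to 1) whose linear
    combination of example end-effector positions best matches `target`."""
    n = len(examples)

    def error(w):
        # Squared distance between blended position and the target constraint.
        return np.sum((w @ examples - target) ** 2)

    w0 = np.full(n, 1.0 / n)  # start from a uniform blend
    res = minimize(
        error, w0, method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    )
    return res.x

target = np.array([0.25, 0.75, 0.5])
w = inverse_blend(target, examples)
blended = w @ examples  # close to `target` when it lies inside the example hull
```

In the full method, the optimized weights would drive a motion interpolation scheme (e.g., blending entire joint-angle trajectories) rather than a single position, and the error function would evaluate the constraint on the actual blended motion.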
