Learnt inverse kinematics for animation synthesis

Existing work on animation synthesis can be roughly split into two approaches: those that combine segments of motion-capture data, and those that perform inverse kinematics (IK). In this paper, we present a method for performing animation synthesis of an articulated object (e.g. a human body or a dog) from a minimal set of body joint positions, following the inverse kinematics approach. We tackle this problem from a learning perspective. First, we address the need for knowledge of the physical constraints of the articulated body, so as to avoid generating physically impossible poses. A common solution is to heuristically specify kinematic constraints for the skeleton model. In this paper, however, the physical constraints of the articulated body are represented using a hierarchical cluster model learnt from a motion-capture database. Additionally, we show that the learnt model automatically captures the correlation between different joints through simultaneous modelling of their angles. We then show how this model can be utilised to perform inverse kinematics in a simple and efficient manner. Crucially, we describe how IK is carried out from a minimal set of end-effector positions. Following this, we show how this "learnt inverse kinematics" framework can be used to perform animation synthesis on different types of articulated structure. The results presented include retargeting a flat-surface walking animation to various uneven terrains, demonstrating the synthesis of full human-body motion from the positions of only the hands, feet and torso. Additionally, we show how the same method can be applied to the animation synthesis of a dog using only its feet and torso positions.
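
The core idea, solving IK while constraining the solution to a pose model learnt from motion capture, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy, not the paper's implementation: it uses a planar three-link chain, a flat k-means cluster model in place of the paper's hierarchical cluster model over full-body joint rotations, synthetic joint-angle data in place of a real motion-capture database, and an off-the-shelf optimiser (SciPy's BFGS) biased towards the nearest cluster centre in place of the paper's method.

```python
# Minimal sketch of "learnt IK": constrain an IK solve to a pose model
# learnt from (here, synthetic) motion-capture data. Everything below --
# the planar chain, the cluster count, the optimiser, the cost weighting --
# is an illustrative assumption, not the authors' code.
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.optimize import minimize

LINK_LENGTHS = np.array([1.0, 0.8, 0.5])  # toy three-link planar arm

def forward_kinematics(angles):
    """End-effector position of a planar chain with cumulative joint angles."""
    cum = np.cumsum(angles)
    return np.array([np.sum(LINK_LENGTHS * np.cos(cum)),
                     np.sum(LINK_LENGTHS * np.sin(cum))])

# Stand-in "motion capture database": joint-angle samples confined to a
# plausible region of pose space.
rng = np.random.default_rng(0)
poses = rng.normal(loc=[0.4, 0.6, -0.3], scale=0.25, size=(2000, 3))

# Learn the pose model: cluster centres summarise where valid poses live.
K = 8
centres, labels = kmeans2(poses, k=K, minit='++', seed=0)
cluster_std = np.array([poses[labels == i].std(axis=0) for i in range(K)])

def learnt_ik(target, weight=0.1):
    """Solve IK for an end-effector target, biased towards the learnt model.

    The objective trades off reaching the target against staying close to
    the nearest cluster centre -- a crude, flat stand-in for the
    hierarchical cluster likelihood used in the paper.
    """
    def cost(theta):
        reach = np.sum((forward_kinematics(theta) - target) ** 2)
        d = np.sum(((theta - centres) / cluster_std) ** 2, axis=1)
        return reach + weight * d.min()

    # Restart the optimiser from each cluster centre; keep the best solution.
    best = min((minimize(cost, c, method='BFGS') for c in centres),
               key=lambda r: r.fun)
    return best.x

pose = learnt_ik(np.array([1.2, 1.0]))
print("solved pose:", pose, "end effector:", forward_kinematics(pose))
```

In this sketch the cluster prior simply pulls the optimiser towards regions of pose space observed in the data, which is how joint correlations captured by the model propagate into the IK solution; the paper's hierarchical model plays the same role with a finer-grained, multi-level partition of pose space.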
