Motion Editing with the State Feedback Dynamic Model

In this paper, we propose a novel motion editing tool, called the state feedback dynamic model, that allows animators to edit pre-existing motion capture data. The model is based on a linear time-invariant (LTI) system. Compared with previous work, the animator needs to modify only a few keyframes manually; the remaining frames are then adjusted automatically while preserving as much of the original quality as possible, making the edit a global modification of the motion sequence. More importantly, the LTI model provides an explicit mapping between the high-dimensional motion capture data and low-dimensional hidden state variables: it transforms a number of possibly correlated joint-angle variables into a smaller number of uncorrelated state variables. The motion sequence is then edited in state space, which accounts for the correlation of motion among joints, in contrast to traditional methods that treat each joint as independent. Finally, an effective algorithm is developed to estimate the model parameters. Experimental results show that the animations generated by this method are natural and smooth.
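To make the LTI mapping concrete, the sketch below fits a discrete-time state-space model x_{t+1} = A x_t, y_t = C x_t to a matrix of frame observations using a standard SVD-based subspace method. This is an illustrative sketch only, not the paper's own estimation algorithm; the function name `fit_lti`, the state dimension, and the toy joint-angle signal are all assumptions for demonstration.

```python
import numpy as np

def fit_lti(Y, n_states):
    """Fit x_{t+1} = A x_t, y_t = C x_t to the columns of Y (d x T).

    Illustrative subspace estimate: the SVD yields an observation
    matrix C with orthonormal (hence uncorrelated) state directions,
    and A is the least-squares fit of the state transition.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_states]                          # d x n observation matrix
    X = np.diag(s[:n_states]) @ Vt[:n_states]    # n x T hidden state sequence
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])     # n x n transition matrix
    return A, C, X

# Toy example: a 10-D "joint angle" signal driven by 2 latent sinusoids.
T = 200
t = np.linspace(0, 8 * np.pi, T)
latent = np.vstack([np.sin(t), np.cos(t)])            # 2 x T latent states
mix = np.random.default_rng(0).standard_normal((10, 2))
Y = mix @ latent                                      # 10 x T observations

A, C, X = fit_lti(Y, n_states=2)
# The 2-D state space reconstructs the 10-D signal almost exactly.
print(np.allclose(Y, C @ X, atol=1e-8))
```

Editing would then happen on the low-dimensional states X (e.g. adjusting a few key states and rolling the dynamics A forward) rather than on each joint channel independently.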
