Style translation for human motion

Style translation is the process of transforming an input motion into a new style while preserving its original content. This problem is motivated by the needs of interactive applications, which require rapid processing of captured performances. Our solution learns to translate by analyzing differences between performances of the same content in input and output styles. It relies on a novel correspondence algorithm to align motions, and a linear time-invariant model to represent stylistic differences. Once the model is estimated with system identification, our system is capable of translating streaming input with simple linear operations at each frame.
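The per-frame linear operations described above can be illustrated as a discrete state-space filter, the standard form of a linear time-invariant model. This is only a minimal sketch: the matrices A, B, C, D below are random placeholders standing in for parameters that the paper estimates offline via system identification, and the feature and state dimensions are hypothetical.

```python
import numpy as np

# Hypothetical dimensions: a 6-D pose feature per frame, a 4-D internal state.
N_IN, N_STATE, N_OUT = 6, 4, 6

rng = np.random.default_rng(0)
# Placeholder parameters; in the paper these would come from system identification.
A = 0.5 * np.eye(N_STATE)                         # state transition
B = rng.normal(scale=0.1, size=(N_STATE, N_IN))   # input-to-state map
C = rng.normal(scale=0.1, size=(N_OUT, N_STATE))  # state-to-output map
D = np.eye(N_OUT)                                  # direct feedthrough

def translate_stream(frames):
    """Translate streaming input frame by frame with simple linear operations:
    y_t = C x_t + D u_t,  x_{t+1} = A x_t + B u_t."""
    x = np.zeros(N_STATE)
    out = []
    for u in frames:
        y = C @ x + D @ u   # output (styled) pose feature for this frame
        x = A @ x + B @ u   # advance the internal state
        out.append(y)
    return np.array(out)

frames = rng.normal(size=(10, N_IN))   # a short synthetic input stream
styled = translate_stream(frames)
print(styled.shape)  # (10, 6)
```

Because the state update and output each cost only a few matrix-vector products, this filter can keep up with streaming captured performances, which is the online property the abstract emphasizes.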
