Style translation for human motion

Style translation is the process of transforming an input motion into a new style while preserving its original content. This problem is motivated by the needs of interactive applications, which require rapid processing of captured performances. Our solution learns to translate by analyzing differences between performances of the same content in input and output styles. It relies on a novel correspondence algorithm to align motions, and a linear time-invariant model to represent stylistic differences. Once the model is estimated with system identification, our system is capable of translating streaming input with simple linear operations at each frame.
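The per-frame translation step described above can be sketched as a discrete-time linear time-invariant state-space filter: with state x_t, input-style frame u_t, and output-style frame y_t, each frame costs only the linear operations y_t = C x_t + D u_t and x_{t+1} = A x_t + B u_t. This is a minimal illustrative sketch, assuming such a state-space realization; the class name `LTIStyleTranslator` and the toy matrices are hypothetical, and in the actual system A, B, C, D would be estimated by system identification from aligned example motions, not hand-picked.

```python
# Sketch of streaming style translation with a discrete-time LTI model.
# The matrices here are illustrative placeholders; in practice they would
# be fit by system identification from time-aligned input/output motions.

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vadd(a, b):
    """Elementwise vector sum."""
    return [x + y for x, y in zip(a, b)]

class LTIStyleTranslator:
    """Per-frame filter: y_t = C x_t + D u_t, then x_{t+1} = A x_t + B u_t."""

    def __init__(self, A, B, C, D, x0):
        self.A, self.B, self.C, self.D = A, B, C, D
        self.x = list(x0)  # hidden state carrying stylistic context

    def step(self, u):
        # Emit the output-style frame for the current input frame u,
        # then advance the hidden state. Both are simple linear operations,
        # so streaming input can be translated frame by frame.
        y = vadd(matvec(self.C, self.x), matvec(self.D, u))
        self.x = vadd(matvec(self.A, self.x), matvec(self.B, u))
        return y

# Toy 1-D usage: a decaying memory of past input frames.
translator = LTIStyleTranslator(A=[[0.5]], B=[[1.0]],
                                C=[[1.0]], D=[[0.0]], x0=[0.0])
frame1 = translator.step([1.0])  # -> [0.0] (state not yet excited)
frame2 = translator.step([1.0])  # -> [1.0] (state now reflects frame 1)
```

In a real setting u_t would stack pose features (e.g. joint angles) for one frame, and the hidden state lets the output style depend on recent history rather than on the current frame alone.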