Spectral style transfer for human motion between independent actions

Human motion is complex and difficult to synthesize realistically. Automatic style transfer, which transforms the mood or identity of a character's motion, is a key technology for increasing the value of already synthesized or captured motion data. Typically, state-of-the-art methods require every independent action observed in the input to also be present in a given style database in order to perform realistic style transfer. We introduce a spectral style transfer method for human motion between independent actions, greatly reducing the effort and cost of building such databases. We leverage a spectral-domain representation of human motion to formulate an approach that requires no spatial correspondences. We extract spectral intensity representations of the reference and source styles for an arbitrary action and transfer their difference to a novel motion, which may contain previously unseen actions. Building on this core method, we introduce a temporally sliding window filter that performs the same analysis locally in time, enabling heterogeneous motion processing. This immediately allows our approach to serve as a style database enhancement technique that fills in missing actions and thus improves the performance of previous style transfer methods. We evaluate our method both through quantitative experiments and through controlled user studies comparing against previous work, in which our approach shows significant improvement.
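A minimal sketch of the core idea, assuming motions are stored as (frames × channels) arrays of joint angles and that all clips are resampled to a common length; the function name `spectral_style_transfer` and the per-channel magnitude-ratio gain are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def spectral_style_transfer(reference, source, novel, eps=1e-8):
    """Hypothetical sketch of spectral style transfer.

    Transfers the spectral-intensity difference between a reference-style
    clip and a source-style clip of the same action onto a novel motion,
    which may be a different action.

    Each motion is a (T, D) array: T frames, D joint channels. All clips
    are assumed to have been resampled to the same length T.
    """
    # Per-channel magnitude spectra of the two style examples.
    ref_mag = np.abs(np.fft.rfft(reference, axis=0))
    src_mag = np.abs(np.fft.rfft(source, axis=0))

    # Per-frequency, per-channel intensity ratio encoding the style change.
    gain = ref_mag / (src_mag + eps)

    # Scale the novel motion's spectrum by the gain while keeping its
    # phase, so the content and timing of the motion are preserved.
    novel_spec = np.fft.rfft(novel, axis=0)
    styled_spec = gain * novel_spec

    return np.fft.irfft(styled_spec, n=novel.shape[0], axis=0)
```

The heterogeneous variant described above would apply the same computation inside overlapping temporal windows (an STFT-style analysis) and blend the resynthesized windows, rather than transforming the whole clip at once.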
