Contact-Aware Retargeting of Skinned Motion

This paper introduces a motion retargeting method that preserves self-contacts and prevents interpenetration. Self-contacts, such as when the hands touch each other, the torso, or the head, are important attributes of human body language and dynamics, yet existing methods do not model or preserve them. Likewise, interpenetration, such as a hand passing through the torso, is a typical artifact of motion estimation methods. The input to our method is a human motion sequence together with a target skeleton and character geometry. The method identifies self-contacts and ground contacts in the input motion and optimizes the motion applied to the output skeleton so that these contacts are preserved and interpenetration is reduced. We introduce a novel geometry-conditioned recurrent network with an encoder-space optimization strategy that achieves efficient retargeting while satisfying contact constraints. In experiments, our results quantitatively outperform previous methods, and in a user study our retargeted motions are rated as higher quality than those produced by recent works. We also show that our method generalizes to motion estimated from human videos, where we improve over previous works that produce noticeable interpenetration.
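To make the encoder-space optimization idea concrete, the sketch below (PyTorch, not the authors' code) freezes a placeholder recurrent motion decoder and optimizes only the latent code so that the decoded character geometry keeps a set of detected self-contact vertex pairs in contact. The module definitions, tensor shapes, contact pairs, and loss are illustrative assumptions; the paper's actual architecture, contact detection, and interpenetration term are not reproduced here.

```python
# Minimal sketch of encoder-space optimization for contact preservation.
# All names and shapes below are hypothetical placeholders, not the paper's code.

import torch
import torch.nn as nn

class MotionDecoder(nn.Module):
    """Placeholder recurrent decoder: latent code sequence -> per-frame vertex positions."""
    def __init__(self, latent_dim=128, n_verts=1000, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(latent_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_verts * 3)

    def forward(self, z):                        # z: (B, T, latent_dim)
        h, _ = self.rnn(z)
        verts = self.out(h)                      # (B, T, n_verts * 3)
        return verts.view(z.shape[0], z.shape[1], -1, 3)

def contact_loss(verts, contact_pairs):
    """Pull vertex pairs detected as self-contacts in the source motion together
    in the retargeted motion. verts: (B, T, V, 3); contact_pairs: (frame, i, j)."""
    loss = verts.new_zeros(())
    for t, i, j in contact_pairs:
        loss = loss + (verts[:, t, i] - verts[:, t, j]).norm(dim=-1).mean()
    return loss / max(len(contact_pairs), 1)

# Encoder-space optimization: the decoder stays frozen; only the latent code z
# (e.g., the encoding of the source motion) is updated by gradient descent.
decoder = MotionDecoder()
z = torch.randn(1, 60, 128, requires_grad=True)   # 60-frame latent sequence (toy init)
contact_pairs = [(10, 5, 42), (30, 7, 99)]        # toy detected self-contacts
opt = torch.optim.Adam([z], lr=1e-2)

for step in range(100):
    opt.zero_grad()
    verts = decoder(z)
    loss = contact_loss(verts, contact_pairs)     # in practice, plus interpenetration and pose terms
    loss.backward()
    opt.step()
```

Optimizing in the latent space of a learned motion model, rather than over raw joint angles, keeps the output close to the learned motion manifold while the contact objective is enforced; this is the general design choice the abstract refers to, sketched here under the stated assumptions.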
