Learning-Based Sphere Nonlinear Interpolation for Motion Synthesis

Motion synthesis technology can produce natural, coordinated motion data without a motion capture process, which is complex and costly. Current motion synthesis methods usually expose only a few interfaces to limit the arbitrariness of the synthesis process, but this also reduces its understandability. In this paper, we propose a learning-based Sphere nonlinear interpolation (Snerp) model that generates natural in-between motions from a given start–end frame pair; varying the input frame pairs enriches the diversity of the generated motions. The angular speed of natural human motion is not uniform and follows different change rules (which we call motion patterns) for different motions, so we first extract the motion patterns and then relate the motion pattern space to the frame pair space via a paired dictionary learning process. After learning, we estimate the motion pattern from the representation of a given start–end frame pair on the frame pair dictionary. We select several different types of start–end frame pairs from real motion sequences as testing data, and good results in both objective and subjective evaluations of the generated motions demonstrate the superior performance of Snerp.
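To make the described pipeline concrete, here is a minimal sketch of the Snerp idea: sparse-code a start–end frame pair on a frame-pair dictionary, reuse that code on the paired motion-pattern dictionary to estimate a timing curve, and then interpolate joint rotations with the resulting non-uniform angular speed. This is an illustration under stated assumptions, not the authors' implementation: the dictionaries `D_pair` and `D_pattern`, the OMP sparse coder, the quaternion joint representation, and the clipping of the predicted timing curve are all assumptions made for the example.

```python
# Minimal sketch of the Snerp pipeline described in the abstract.
# Assumes a quaternion joint representation and an already-trained paired
# dictionary: D_pair (frame-pair atoms) and D_pattern (motion-pattern atoms)
# sharing the same sparse codes. Names and shapes are illustrative only.
import numpy as np
from sklearn.linear_model import orthogonal_mp  # sparse coding via OMP


def estimate_motion_pattern(frame_pair, D_pair, D_pattern, n_nonzero=5):
    """Sparse-code the start-end frame pair on the frame-pair dictionary and
    reuse the code on the pattern dictionary to predict a timing curve."""
    code = orthogonal_mp(D_pair, frame_pair, n_nonzero_coefs=n_nonzero)
    pattern = D_pattern @ code          # warped interpolation parameters
    return np.clip(pattern, 0.0, 1.0)   # assumption: timing curve lives in [0, 1]


def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = np.clip(np.dot(q0, q1), -1.0, 1.0)
    if dot < 0.0:                       # take the shorter arc
        q1, dot = -q1, -dot
    theta = np.arccos(dot)
    if theta < 1e-6:                    # nearly identical rotations
        return q0
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)


def snerp_inbetween(q_start, q_end, pattern):
    """Generate in-between rotations whose angular speed follows the estimated
    motion pattern instead of a uniform slerp schedule."""
    return np.stack([slerp(q_start, q_end, t) for t in pattern])
```

In this sketch, the two dictionaries share a common sparse code, which is what allows the code estimated from a new start–end frame pair to be reinterpreted as a timing curve that warps the interpolation parameter away from a uniform schedule.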
