DanceNet3D: Music Based Dance Generation with Parametric Motion Transformer

In this work, we propose a novel deep learning framework that generates vivid dances from whole pieces of music. In contrast to previous works that formulate the problem as frame-by-frame generation of motion state parameters, we formulate the task as prediction of motion curves between key poses, inspired by animation industry practice. The proposed framework, named DanceNet3D, first generates key poses on the beats of the given music and then predicts the in-between motion curves. DanceNet3D adopts an encoder-decoder architecture and adversarial training schemes. The decoders in DanceNet3D are built on MoTrans, a transformer tailored for motion generation. In MoTrans we introduce kinematic correlation through Kinematic Chain Networks, and we propose a Learned Local Attention module to account for the temporal local correlation of human motion. Furthermore, we present PhantomDance, the first large-scale dance dataset produced by professional animators, with accurate synchronization to music. Extensive experiments demonstrate that the proposed approach generates fluent, elegant, performative and beat-synchronized 3D dances, significantly surpassing previous works both quantitatively and qualitatively. The project link is https://huiye-tech.github.io/
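
As a rough illustration of the local-attention idea mentioned above (not the authors' implementation), the sketch below restricts each frame's self-attention to a fixed temporal window, reflecting the local correlation of human motion. The window size, feature dimensions, and function names are hypothetical choices for illustration only.

```python
# Minimal sketch of windowed (local) self-attention over a motion sequence.
# All dimensions and the window size are illustrative assumptions.
import torch
import torch.nn.functional as F

def local_self_attention(x, w_q, w_k, w_v, window=8):
    """x: (T, d) per-frame features; w_q/w_k/w_v: (d, d) projection matrices."""
    T, d = x.shape
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = (q @ k.t()) / d ** 0.5                      # (T, T) attention logits
    idx = torch.arange(T)
    mask = (idx[:, None] - idx[None, :]).abs() > window  # True where frames are too far apart
    scores = scores.masked_fill(mask, float('-inf'))     # keep only the local temporal band
    return F.softmax(scores, dim=-1) @ v                 # (T, d) attended features

# Usage with random tensors (shapes are illustrative only).
T, d = 60, 32
x = torch.randn(T, d)
w_q, w_k, w_v = (torch.randn(d, d) * d ** -0.5 for _ in range(3))
out = local_self_attention(x, w_q, w_k, w_v, window=8)
print(out.shape)  # torch.Size([60, 32])
```

The banded mask is one straightforward way to realize temporally local attention; the paper's Learned Local Attention module may parameterize the locality differently.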
