DanceFormer: Music Conditioned 3D Dance Generation with Parametric Motion Transformer

Generating 3D dances from music is an emerging research task that benefits many applications in vision and graphics. Previous works treat this task as sequence generation; however, it is challenging to render a music-aligned long-term sequence with high kinematic complexity and coherent movements. In this paper, we reformulate the task as a two-stage process: key pose generation followed by in-between parametric motion curve prediction, where key poses are easier to synchronize with the music beats and the parametric curves can be efficiently regressed to render fluent, rhythm-aligned movements. We name the proposed method DanceFormer; it comprises two cascading kinematics-enhanced transformer-guided networks (called DanTrans) that tackle the two stages respectively. Furthermore, we propose a large-scale music-conditioned 3D dance dataset, called PhantomDance, that is accurately labeled by experienced animators rather than obtained by reconstruction or motion capture. This dataset also encodes dances as key poses and parametric motion curves in addition to pose sequences, thus benefiting the training of our DanceFormer. Extensive experiments demonstrate that the proposed method, even when trained on existing datasets, can generate fluent, performative, and music-matched 3D dances that surpass previous works both quantitatively and qualitatively. Moreover, the proposed DanceFormer, together with the PhantomDance dataset, is seamlessly compatible with industrial animation software, thus facilitating adaptation to various downstream applications.
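To make the second stage concrete, the sketch below shows one way dense in-between frames could be rendered from generated key poses using a tension/continuity/bias (Kochanek-Bartels) spline, a common family of parametric motion curves in animation tools. This is a minimal illustration under assumed conventions (a flat per-frame pose vector, a fixed frame count per segment); the function name and pose encoding are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def tcb_interpolate(key_poses, frames_per_segment, tension=0.0, continuity=0.0, bias=0.0):
    """Densify key poses with a Kochanek-Bartels (TCB) spline.

    key_poses: (K, D) array of K key poses, each a flat D-dim pose vector
               (e.g. concatenated per-joint rotation parameters); K >= 2.
    Returns an (N, D) array of in-between frames that passes through every key pose.
    """
    P = np.asarray(key_poses, dtype=float)
    K = P.shape[0]
    # Clamp the endpoints so every key pose has two neighbours.
    Pp = np.vstack([P[:1], P, P[-1:]])  # (K+2, D); Pp[i+1] == P[i]
    t, c, b = tension, continuity, bias
    out = []
    for i in range(K - 1):
        p0, p1 = P[i], P[i + 1]
        prev0, next1 = Pp[i], Pp[i + 3]  # neighbour before p0, neighbour after p1
        # Outgoing tangent at p0 and incoming tangent at p1 (standard TCB form).
        d0 = (0.5 * (1 - t) * (1 + b) * (1 + c) * (p0 - prev0)
              + 0.5 * (1 - t) * (1 - b) * (1 - c) * (p1 - p0))
        d1 = (0.5 * (1 - t) * (1 + b) * (1 - c) * (p1 - p0)
              + 0.5 * (1 - t) * (1 - b) * (1 + c) * (next1 - p1))
        # Cubic Hermite basis over the segment; endpoint=False avoids duplicate keys.
        s = np.linspace(0.0, 1.0, frames_per_segment, endpoint=False)[:, None]
        h00 = 2 * s**3 - 3 * s**2 + 1
        h10 = s**3 - 2 * s**2 + s
        h01 = -2 * s**3 + 3 * s**2
        h11 = s**3 - s**2
        out.append(h00 * p0 + h10 * d0 + h01 * p1 + h11 * d1)
    out.append(P[-1:])  # include the final key pose itself
    return np.vstack(out)
```

With tension, continuity, and bias all zero this reduces to a Catmull-Rom spline, so the defaults already give smooth pass-through interpolation; the three parameters are the per-curve degrees of freedom that a regressor in the second stage could predict to shape the motion between beats.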
