Dancing-to-Music Character Animation

In computer graphics, considerable research has been conducted on realistic human motion synthesis. However, most of this research does not consider the emotional aspects that often strongly affect human motion. This paper presents a new approach for synthesizing dance performance matched to input music, based on the emotional aspects of dance performance. Our method consists of motion analysis, music analysis, and motion synthesis based on the extracted features. In the analysis steps, motion and music feature vectors are acquired: motion vectors are derived from motion rhythm and intensity, while music vectors are derived from musical rhythm, structure, and intensity. To synthesize a dance performance, we first find candidate motion segments whose rhythm features match those of each music segment, and then select the motion segment set whose intensity is most similar to that of the music segments. Additionally, our system lets animators control the synthesis process by assigning desired motion segments to specified music segments. The experimental results indicate that our method creates dance performances as if the character were listening and expressively dancing to the music.
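The two-stage matching step can be illustrated with a small sketch. The following Python is not the paper's implementation: the Segment structure, the Euclidean rhythm distance, and the rhythm_threshold parameter are all simplifying assumptions, and the greedy per-segment selection stands in for the paper's search over whole motion segment sets.

    # A minimal sketch of the two-stage matching, assuming (hypothetically)
    # that rhythm features have been reduced to per-segment vectors and
    # intensity to a per-segment scalar.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        rhythm: list[float]   # rhythm feature vector for the segment
        intensity: float      # scalar intensity of the segment

    def rhythm_distance(a: list[float], b: list[float]) -> float:
        """Euclidean distance between two rhythm feature vectors."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def match_segments(music: list[Segment],
                       motions: list[Segment],
                       rhythm_threshold: float = 0.5) -> list[Segment]:
        """For each music segment: (1) keep motion segments whose rhythm
        features are close enough, then (2) pick the candidate whose
        intensity best matches the music segment's intensity."""
        result = []
        for m in music:
            candidates = [mo for mo in motions
                          if rhythm_distance(m.rhythm, mo.rhythm) <= rhythm_threshold]
            if not candidates:  # fall back to the nearest rhythm match
                candidates = [min(motions,
                                  key=lambda mo: rhythm_distance(m.rhythm, mo.rhythm))]
            best = min(candidates, key=lambda mo: abs(mo.intensity - m.intensity))
            result.append(best)
        return result

    # Usage: two music segments matched against a toy motion database.
    music = [Segment([1.0, 0.0], 0.8), Segment([0.0, 1.0], 0.3)]
    motions = [Segment([0.9, 0.1], 0.7), Segment([0.1, 0.9], 0.4),
               Segment([1.0, 0.0], 0.1)]
    print(match_segments(music, motions))

Filtering on rhythm before scoring intensity mirrors the order described in the abstract: rhythmic compatibility is treated as a hard constraint, while intensity similarity ranks the surviving candidates.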
