HUMAN PERCEPTION AND COMPUTER EXTRACTION OF MUSICAL BEAT STRENGTH

Musical signals exhibit periodic temporal structures that create the sensation of rhythm. In order to model, analyze, and retrieve musical signals, it is important to extract rhythmic information automatically. To simplify the problem, automatic algorithms typically extract information only about the main beat of the signal, which can be loosely defined as the regular periodic sequence of pulses corresponding to where a human would tap their foot while listening to the music. In these algorithms, the beat is characterized by its frequency (tempo), its phase (accent locations), and a confidence measure for its detection. The main focus of this paper is the concept of Beat Strength, loosely defined as a rhythmic characteristic that can discriminate between two pieces of music having the same tempo. Under this definition, a piece of Hard Rock has a higher beat strength than a piece of Classical music at the same tempo. Characteristics related to Beat Strength have been used implicitly in automatic beat detection algorithms and have been shown to be as important as tempo information for music classification and retrieval. In the work presented in this paper, a user study exploring the perception of Beat Strength was conducted, and its results were used to calibrate and explore automatic Beat Strength measures based on the calculation of Beat Histograms.
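
To make the Beat Histogram idea concrete, the sketch below builds one by autocorrelating an onset-strength envelope and binning the resulting periodicities by tempo, then derives two candidate Beat Strength measures from the histogram. This is a minimal illustration under stated assumptions, not the paper's implementation: it substitutes a spectral-flux envelope for a wavelet-based front end, and the measure names PEAK and SUM are hypothetical labels for the kind of histogram statistics such measures summarize.

```python
import numpy as np

def beat_histogram(signal, sr, hop=512, win=1024, min_bpm=40, max_bpm=200):
    """Sketch of a Beat Histogram: autocorrelate an onset-strength
    envelope and bin the lag-domain periodicities by tempo (BPM)."""
    # Onset-strength envelope via spectral flux (a stand-in for the
    # wavelet-based envelope extraction; an assumption of this sketch).
    n_frames = max(1, (len(signal) - win) // hop)
    flux = np.zeros(n_frames)
    prev_mag = np.zeros(win // 2 + 1)
    window = np.hanning(win)
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + win] * window
        mag = np.abs(np.fft.rfft(frame))
        # Half-wave rectified magnitude increase between frames.
        flux[i] = np.sum(np.maximum(mag - prev_mag, 0.0))
        prev_mag = mag
    flux -= flux.mean()

    # Autocorrelation of the envelope; a regular beat shows up as
    # peaks at lags corresponding to the beat period and its multiples.
    ac = np.correlate(flux, flux, mode="full")[n_frames - 1:]
    ac /= (ac[0] + 1e-12)  # normalize so that lag 0 equals 1

    env_sr = sr / hop  # envelope frames per second
    bpms = np.arange(min_bpm, max_bpm + 1)
    hist = np.zeros(len(bpms))
    for j, bpm in enumerate(bpms):
        lag = int(round(env_sr * 60.0 / bpm))
        if 0 < lag < len(ac):
            hist[j] = max(ac[lag], 0.0)
    return bpms, hist

def beat_strength_measures(hist):
    """Candidate Beat Strength statistics (hypothetical names): the
    histogram peak reflects how dominant the main beat is, the sum
    reflects overall rhythmic salience across tempi."""
    return {"PEAK": float(hist.max()), "SUM": float(hist.sum())}

# Usage: a 120 BPM click track yields a sharp histogram peak, so both
# measures come out large; noise or smooth material stays flat.
sr = 22050
t = np.arange(sr * 10) / sr
clicks = np.sin(2 * np.pi * 1000 * t) * (np.mod(t, 0.5) < 0.01)
bpms, hist = beat_histogram(clicks, sr)
print(beat_strength_measures(hist))
```

The intuition the sketch captures is the one the abstract relies on: two signals can share the dominant histogram tempo while differing in how pronounced that peak is, and it is this difference that a Beat Strength measure is meant to quantify.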
