Music mood classification by rhythm and bass-line unit pattern analysis

This paper presents a feature extraction approach for audio mood classification, an important and challenging problem in music information retrieval (MIR). Timbral information has been widely used for this task; however, many musical moods are characterized not only by timbre but also by musical scale and by temporal features such as rhythm patterns and bass-line patterns. In particular, modern music pieces mostly follow certain fixed rhythm and bass-line patterns, and these patterns characterize the impression of a song. We propose extracting rhythm and bass-line unit patterns and combining this unit pattern analysis with statistical feature extraction for mood classification. Experimental results show that the automatically computed unit pattern information can effectively classify musical mood.
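The pipeline described above can be sketched as follows: bar-level rhythm/bass-line descriptors are clustered into "unit patterns", each song is summarized as a histogram of pattern occurrences, and a classifier predicts mood from those histograms. This is a minimal illustration with synthetic data, not the paper's actual features or code; the descriptor dimensionality, cluster count, and SVM choice are all assumptions made for the sketch.

```python
# Hedged sketch of unit-pattern-based mood classification. All data here is
# synthetic; real systems would extract rhythm/bass-line descriptors from audio.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: bars are 12-dim descriptors drawn from two latent
# pattern templates, so the clustering step has real structure to find.
templates = rng.normal(size=(2, 12))
bar_ids = rng.integers(0, 2, size=200)
bars = templates[bar_ids] + 0.1 * rng.normal(size=(200, 12))

# Step 1: learn K unit patterns from all bars (K=2 in this toy example).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(bars)

def song_histogram(song_bars):
    """Summarize a song as a normalized histogram over unit patterns."""
    labels = kmeans.predict(song_bars)
    hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# Step 2: per-song histogram features. "Mood 0" songs use mostly pattern
# template 0; "mood 1" songs use mostly template 1.
def make_song(mood):
    p = [0.9, 0.1] if mood == 0 else [0.1, 0.9]
    ids = rng.choice(2, size=16, p=p)
    return templates[ids] + 0.1 * rng.normal(size=(16, 12))

X = np.array([song_histogram(make_song(m)) for m in [0, 1] * 20])
y = np.array([0, 1] * 20)

# Step 3: classify mood from the pattern histograms with an SVM.
clf = SVC(kernel="rbf").fit(X, y)
accuracy = clf.score(X, y)
```

In practice such histogram features would be concatenated with timbral statistics (e.g. MFCC means and variances) before classification, which is the kind of combination the abstract refers to.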
