Notice of Retraction: Multi-modal music genre classification approach

As a fundamental and critical component of music information retrieval (MIR) systems, automatic music genre classification is a challenging problem. Traditional approaches that rely solely on low-level audio features often fail to achieve satisfactory results. In recent years, social tags have emerged as an important source of information about resources on the web. In this paper, we therefore propose a novel multi-modal music genre classification approach that uses acoustic features and social tags together to classify music by genre. For the audio content-based classification, we design a new feature selection algorithm called IBFFS (Interaction-Based Forward Feature Selection), which selects features according to pre-computed rules that account for interactions between features. We are also interested in how to perform automatic music genre classification from the available tag data alone. We develop two classification methods based on social tags (both music-tags and artist-tags) crawled from the website Last.fm: (1) we apply the generative probabilistic model Latent Dirichlet Allocation (LDA) to the music-tags, which yields the probability of each tag belonging to each genre; (2) the second method starts from the observation that an artist is often more closely associated with particular genres than an individual track is, so we compute the similarity between artist-tag vectors to infer which genre a track belongs to. Finally, our experimental results demonstrate the benefit of the proposed multi-modal approach.
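The abstract does not spell out how IBFFS combines its pre-computed interaction rules with the greedy search, so the following Python sketch only illustrates the general idea: a forward selection loop whose scoring mixes cross-validated accuracy with a pairwise interaction table. The classifier (`SVC`), the 0.1 weighting factor, and the `interaction_score` matrix are assumptions for illustration, not the paper's definitions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def ibffs(X, y, interaction_score, k):
    """Greedy forward selection that scores each candidate feature by
    cross-validated accuracy plus a precomputed pairwise interaction
    bonus with the already-selected subset (illustrative only)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        def gain(f):
            # sum of pairwise interaction scores with the current subset
            inter = sum(interaction_score[f, s] for s in selected)
            acc = cross_val_score(SVC(), X[:, selected + [f]], y, cv=3).mean()
            return acc + 0.1 * inter  # the weighting is an arbitrary assumption
        best = max(remaining, key=gain)
        selected.append(best)
        remaining.remove(best)
    return selected

if __name__ == "__main__":
    # synthetic placeholder data, not the paper's audio features
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 8))
    y = rng.integers(0, 2, size=120)
    inter = rng.random((8, 8))
    print(ibffs(X, y, inter, k=3))
```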
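As a rough illustration of method (1), the sketch below fits scikit-learn's LatentDirichletAllocation to a track-by-tag count matrix and normalizes the fitted components into per-topic tag probabilities. Equating one latent topic with one genre, and the synthetic `tag_counts` matrix, are assumptions for illustration only; the paper's exact mapping from topics to genres is not given in the abstract.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# placeholder: 200 tracks x 50 tags of random counts standing in for
# Last.fm music-tag frequencies
rng = np.random.default_rng(0)
tag_counts = rng.integers(0, 5, size=(200, 50))

n_genres = 10  # assumed size of the genre taxonomy
lda = LatentDirichletAllocation(n_components=n_genres, random_state=0)
doc_topic = lda.fit_transform(tag_counts)  # per-track topic proportions

# normalize pseudo-counts so that topic_tag[g, t] approximates the
# probability of tag t under genre-topic g
topic_tag = lda.components_ / lda.components_.sum(axis=1, keepdims=True)
print(topic_tag.shape)  # (n_genres, n_tags)
```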
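Method (2) reduces to a nearest-prototype lookup over artist-tag vectors. A minimal sketch, assuming genre prototype vectors are built elsewhere (e.g. as the mean tag vector of artists with known genre, which is our assumption, not the paper's stated procedure):

```python
import numpy as np

def cosine(a, b):
    # cosine similarity with a small epsilon to avoid division by zero
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def infer_genre(artist_vec, genre_prototypes):
    """Return the genre whose prototype tag vector is most similar
    to the artist's tag vector."""
    return max(genre_prototypes, key=lambda g: cosine(artist_vec, genre_prototypes[g]))

# toy usage with hypothetical 4-dimensional tag vectors
prototypes = {
    "rock": np.array([3.0, 1.0, 0.0, 0.0]),
    "jazz": np.array([0.0, 0.5, 2.0, 1.0]),
}
print(infer_genre(np.array([2.0, 1.0, 0.0, 0.5]), prototypes))  # -> "rock"
```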
