Evaluating the Genre Classification Performance of Lyrical Features Relative to Audio, Symbolic and Cultural Features

This paper describes experimental research investigating the utility, for genre classification, of combining features extracted from lyrical, audio, symbolic and cultural sources of musical information. Cultural features, consisting of information extracted from both web searches and mined listener tags, were found to be particularly effective, yielding classification accuracies that compare favorably with the current state of the art in musical genre classification. Features extracted from lyrics were found to be less effective than the other feature types. Finally, it was found that, with some exceptions, combining feature types does improve classification performance. The new lyricFetcher and jLyrics software are also presented as tools that can serve as a framework for developing more effective lyric-based classification methodologies in the future.
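The combination strategy described above can be illustrated with a minimal early-fusion sketch: per-track feature vectors from each source are concatenated into a single vector and passed to one classifier, and accuracy is estimated by cross-validation. The sketch below is an assumption for illustration only: it uses synthetic data, arbitrary feature dimensionalities and a generic SVM rather than the paper's actual jMIR feature extractors or evaluation setup.

```python
# Minimal early-fusion sketch: concatenate per-source feature blocks per track
# and train a single classifier. All data and dimensionalities here are
# synthetic, illustrative assumptions (not values from the paper).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_tracks, n_genres = 300, 5

# Hypothetical feature blocks for each information source.
blocks = {
    "lyrical": rng.normal(size=(n_tracks, 40)),
    "audio": rng.normal(size=(n_tracks, 60)),
    "symbolic": rng.normal(size=(n_tracks, 30)),
    "cultural": rng.normal(size=(n_tracks, 20)),
}
labels = rng.integers(0, n_genres, size=n_tracks)

# Early fusion: one concatenated feature vector per track.
X = np.hstack(list(blocks.values()))

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, labels, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

In practice, the same pipeline could be rerun with individual blocks (or subsets of blocks) as `X` to compare the contribution of each feature type, which mirrors the kind of comparison the paper reports.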
