An Evaluation of Musical Score Characteristics for Automatic Classification of Composers

Although humans can readily distinguish between different types of music, automated music classification remains a considerable challenge. Within the last decade, numerous studies have addressed the problem using both audio and score analysis (Kranenburg and Baker 2004; Manaris et al. 2005; Laurier and Herrera 2007; Weihs et al. 2007; Raphael 2008; Laurier et al. 2009). The classifications in these studies were performed mostly with inference methods and/or machine-learning methods, and the results have been quite modest (Kranenburg and Baker 2004; Laurier et al. 2009). Because music can be classified in many ways, studies have focused on diverse classification targets. Kranenburg and Baker (2004) showed that it is possible to automatically recognize musical style from compositions of five well-known 18th-century composers. Numerous algorithms have been proposed to detect important musical features (melodic, rhythmic, and harmonic) in large corpora of scores using data-mining and machine-learning techniques (Hartmann et al. 2007). Geertzen and Zaanen (2008) presented an approach to automatic composer recognition based on learning recurring patterns in music by grammatical inference.

Music can be represented as audio or as notation, and existing classification studies encode features with different representations. Clearly, the nature of the representation is a major determinant of the success of the classification. The difficulty that present approaches have in classifying composers by their compositions therefore stems from using features that do not sufficiently capture the differences between the composers. Manaris et al. (2005), using a new set of features (metrics), achieved a classification accuracy of 94 percent for five composers. Their experiments, however, seem questionable: they performed only one holdout test instead of multiple cross-validation runs.
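The methodological objection above — that a single holdout split can yield a misleading accuracy estimate compared with repeated cross-validation — can be illustrated with a minimal sketch. The data here are synthetic stand-ins for score-derived features (not the actual corpus or metrics used in any of the cited studies), and the nearest-centroid classifier is a deliberately simple placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 synthetic "pieces" by 2 composers, 5 hypothetical score features each
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)

def nearest_centroid_accuracy(X_train, y_train, X_test, y_test):
    """Assign each test piece to the class with the nearest training centroid."""
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    preds = dists.argmin(axis=1)
    return float((preds == y_test).mean())

# Single holdout test: one arbitrary 70/30 split, one accuracy number
holdout_acc = nearest_centroid_accuracy(X[:70], y[:70], X[70:], y[70:])

# 10-fold cross-validation: every piece is tested exactly once,
# and the spread across folds exposes the variance a single split hides
folds = np.array_split(rng.permutation(100), 10)
cv_accs = []
for test_idx in folds:
    train_mask = np.ones(100, dtype=bool)
    train_mask[test_idx] = False
    cv_accs.append(nearest_centroid_accuracy(
        X[train_mask], y[train_mask], X[test_idx], y[test_idx]))

print(f"holdout: {holdout_acc:.2f}")
print(f"cv mean: {np.mean(cv_accs):.2f} +/- {np.std(cv_accs):.2f} over 10 folds")
```

Because the labels are random, both estimates should hover near chance level, but the holdout figure is a single draw from a distribution whose spread only the cross-validation folds reveal — which is why a lone 94 percent holdout result warrants caution.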

[1] Julius O. Smith et al. J.O. Smith III Comments on Sullivan Karplus-Strong Article, 1991.

[2] David Huron et al. Humdrum and Kern: Selective Feature Encoding, 1997.

[3] Eleanor Selfridge-Field et al. Beyond MIDI: The Handbook of Musical Codes, 1997.

[4] Jeroen Geertzen et al. Composer Classification Using Grammatical Inference, 2008.

[5] Ian H. Witten et al. Data Mining: Practical Machine Learning Tools and Techniques, 3rd Edition, 1999.

[6] Christopher Raphael et al. A Classifier-Based Approach to Score-Guided Source Separation of Musical Audio, 2008, Computer Music Journal.

[7] Petri Toiviainen et al. Exploring Relationships between Audio Features and Emotion in Music, 2009.

[8] E. Backer et al. Musical Style Recognition: A Quantitative Approach, 2004.

[9] Bill Z. Manaris et al. Armonique: Experiments in Content-Based Similarity Retrieval Using Power-Law Melodic and Timbre Metrics, 2008, ISMIR.

[10] Daniel Dominic Sleator et al. Modeling Meter and Harmony: A Preference-Rule Approach, 1999, Computer Music Journal.

[11] Perfecto Herrera et al. Audio Music Mood Classification Using Support Vector Machine, 2007.

[12] Penousal Machado et al. Zipf's Law, Music Classification, and Aesthetics, 2005, Computer Music Journal.

[13] Stephen R. Garner et al. WEKA: The Waikato Environment for Knowledge Analysis, 1996.

[14] Yoram Reich et al. Evaluating Machine Learning Models for Engineering Problems, 1999, Artif. Intell. Eng.

[15] Ian H. Witten et al. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations, 2002, SIGMOD Record.

[16] Janez Demšar. Statistical Comparisons of Classifiers over Multiple Data Sets, 2006, J. Mach. Learn. Res.

[17] Claus Weihs et al. Classification in Music Research, 2007, Adv. Data Anal. Classif.

[18] A. Nürnberger et al. Interactive Data Mining & Machine Learning Techniques for Musicology, 2007.