Automatic Performer Identification in Celtic Violin Audio Recordings

Abstract

We present a machine learning approach to the problem of identifying performers from their interpretative styles. In particular, we investigate how violinists convey their individual interpretation of the musical content in audio recordings, and we feed this information to a number of machine learning techniques in order to induce classifiers capable of identifying the interpreters. We apply sound analysis techniques based on spectral models to extract expressive features, such as pitch, timing, and amplitude, representing both note characteristics and the musical context in which the notes appear. Our results indicate that the extracted features contain sufficient information to distinguish the performers considered, and that the machine learning methods explored are capable of learning the expressive patterns that characterize each interpreter.
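The pipeline the abstract describes, per-note expressive features fed to a classifier, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature names (pitch deviation in cents, duration ratio against the score, relative energy), the training values, and the nearest-centroid rule standing in for the paper's learning algorithms are all assumptions made for the example.

```python
# Minimal sketch of performer identification from per-note expressive features.
# Features and values are illustrative, not taken from the paper's data.
import math

# Each note: (pitch deviation in cents, duration ratio vs. score, relative energy)
training = {
    "performer_A": [(12.0, 1.10, 0.80), (15.0, 1.05, 0.75), (10.0, 1.12, 0.82)],
    "performer_B": [(-5.0, 0.95, 0.60), (-8.0, 0.90, 0.55), (-3.0, 0.97, 0.62)],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

# One expressive "profile" per performer, built from their training notes.
centroids = {name: centroid(vecs) for name, vecs in training.items()}

def classify(note):
    """Assign a note to the performer with the closest expressive profile."""
    return min(centroids, key=lambda name: math.dist(note, centroids[name]))

print(classify((11.0, 1.08, 0.78)))  # → performer_A (near A's profile)
```

In the paper itself, the classifiers are induced by the machine learning techniques under study rather than by this toy distance rule, but the overall shape, note-level expressive descriptors in, performer label out, is the same.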
