Automatic performer identification in commercial monophonic Jazz performances

We present a pattern recognition approach to identifying performers from their interpretative styles. We investigate how professional musicians express their interpretation of the musical content of a piece, and how this information can be used to identify performers automatically. We apply sound analysis techniques based on spectral models to extract deviation patterns in parameters such as pitch, timing, amplitude, and timbre, characterising both the internal structure of notes and the musical context in which they appear. We describe successful performer identification case studies involving monophonic audio recordings of both score-guided and commercially released improvised performances.
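
As a minimal illustrative sketch of the classification stage (not the paper's own implementation): once per-note deviation features have been extracted with a spectral-model analysis, a standard classifier can map each note's feature vector to a performer label. The feature layout, the placeholder data, and the use of scikit-learn's support vector machine are assumptions made for illustration only.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-note deviation features, one row per performed note:
# pitch deviation (cents), onset timing deviation (s), duration ratio,
# mean energy, spectral centroid. Random placeholder data stands in for
# the output of a real spectral-model feature extractor.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))        # 200 notes, 5 features each
y = rng.integers(0, 4, size=200)     # labels for 4 hypothetical performers

# Standardise the features, then classify with an RBF-kernel SVM,
# estimating identification accuracy by 5-fold cross-validation over notes.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean identification accuracy: {scores.mean():.2f}")
```

With real data, cross-validation folds would be split by recording rather than by note, so that notes from the same performance never appear in both the training and test sets.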
