Studying a creative act with computers: Music performance studies with automated discovery methods

Abstract

The purpose of this article is to demonstrate how advanced computer methods can provide new insights into a complex creative activity such as music performance. The context is an interdisciplinary research project in which Artificial Intelligence (AI) methods are used to analyse patterns in performances by human artists. In asking how the computer can take us closer to an understanding of creativity in music performance, we identify two pertinent research strategies within our project: the use of machine learning algorithms that try to discover common performance principles, and thus help separate the "rationally explainable" aspects of performance from the more genuinely "creative" ones; and the use of data mining methods that can discover, visualise and describe performance patterns characteristic of the style of particular artists, which may therefore be more directly related to their individual creativity. Preliminary results are briefly presented that indicate the kinds of discoveries these algorithms can make. Some general issues regarding (musical) creativity and its relation to Artificial Intelligence are also briefly discussed.
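To make the second strategy concrete, the following is a minimal illustrative sketch (not the project's actual system) of how performance patterns might characterize individual artists: each performance is summarized by simple tempo- and loudness-deviation features, and a new performance is attributed to the artist with the nearest feature centroid. All artist names and feature values here are synthetic assumptions for illustration.

```python
# Illustrative sketch of performer characterization via nearest-centroid
# classification over tempo/loudness summary features. Synthetic data only.
from statistics import mean

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return tuple(mean(dim) for dim in zip(*vectors))

def classify(features, centroids):
    """Return the artist whose centroid is nearest (squared Euclidean)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda artist: sqdist(features, centroids[artist]))

# Hypothetical training data: (mean tempo deviation, mean loudness deviation)
# per performance, grouped by artist.
training = {
    "artist_A": [(0.12, 0.30), (0.10, 0.28), (0.14, 0.33)],
    "artist_B": [(0.05, 0.10), (0.04, 0.12), (0.06, 0.09)],
}
centroids = {artist: centroid(perfs) for artist, perfs in training.items()}

print(classify((0.11, 0.29), centroids))  # nearest to artist_A's centroid
```

A real system would of course use far richer features (e.g. beat-level timing curves, dynamics trajectories) and more capable learners; the point is only that stylistic attribution can be framed as classification over measurable expressive parameters.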
