The Continuator: Musical Interaction With Style

We propose a system, the Continuator, that bridges the gap between two classes of traditionally incompatible musical systems: (1) interactive musical systems, which are limited in their ability to generate stylistically consistent material, and (2) music imitation systems, which are fundamentally not interactive. Our purpose is to allow musicians to extend their technical ability with stylistically consistent, automatically learnt material. This goal requires that the system build operational representations of musical styles in real time. Our approach is based on a Markov model of musical styles, augmented to handle musical issues such as rhythm, beat, harmony, and imprecision. The resulting system is able to learn and generate music in any style, either in standalone mode, as continuations of a musician's input, or as interactive improvisation backup. Lastly, the very design of the system makes possible new modes of collaborative musical playing. We describe the architecture, implementation issues, and experiments conducted with the system in several real-world contexts.
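
To illustrate the learn/continue cycle the abstract describes, here is a minimal sketch, assuming a variable-order Markov model over bare pitch sequences. It is not the paper's implementation: the actual system also models rhythm, velocity, and harmonic context, and the class and method names below (ContinuationModel, learn, continue_phrase) are hypothetical.

```python
# Minimal sketch (assumption, not the paper's code): a variable-order Markov
# model that learns phrases and generates stylistically similar continuations.
# Notes are plain MIDI pitch numbers for clarity.
from collections import defaultdict
import random

class ContinuationModel:
    def __init__(self, max_order=4):
        self.max_order = max_order
        # Maps a context tuple (up to max_order preceding notes) to the
        # list of notes observed to follow that context in learned phrases.
        self.continuations = defaultdict(list)

    def learn(self, phrase):
        """Index every context of length 1..max_order ending before each note."""
        for i in range(1, len(phrase)):
            start = max(0, i - self.max_order)
            for j in range(start, i):
                context = tuple(phrase[j:i])
                self.continuations[context].append(phrase[i])

    def continue_phrase(self, seed, length=8):
        """Extend the seed, preferring the longest learned context (back-off)."""
        result = list(seed)
        for _ in range(length):
            next_note = None
            for order in range(self.max_order, 0, -1):
                context = tuple(result[-order:])
                if context in self.continuations:
                    next_note = random.choice(self.continuations[context])
                    break
            if next_note is None:
                break  # no learned continuation for any suffix of the input
            result.append(next_note)
        return result

# Usage: learn two short phrases, then continue a new input in the same "style".
model = ContinuationModel()
model.learn([60, 62, 64, 65, 67])   # C D E F G
model.learn([60, 62, 64, 62, 60])   # C D E D C
print(model.continue_phrase([60, 62, 64]))
```

The back-off from long to short contexts is what lets such a model favor continuations that closely match the player's recent phrase while still falling back to more general statistics when no long match exists.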
