A Meta-Instrument for Interactive, On-the-Fly Machine Learning

Supervised learning methods have long been used to allow musical interface designers to generate new mappings by example. We propose a method for harnessing machine learning algorithms within a radically interactive paradigm, in which the designer may repeatedly generate examples, train a learner, evaluate outcomes, and modify parameters in real time within a single software environment. We describe our meta-instrument, the Wekinator, which allows a user to engage in on-the-fly learning using arbitrary control modalities and sound synthesis environments. We provide details regarding the system implementation and discuss our experiences using the Wekinator for experimentation and performance.
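The interactive loop described above (demonstrate examples, train, evaluate, refine) can be sketched as follows. This is a minimal illustration only, not the Wekinator's actual API: the class and method names are hypothetical, and a toy 1-nearest-neighbour mapper stands in for the system's real learners.

```python
class OnTheFlyLearner:
    """Toy 1-nearest-neighbour mapper from control features to synthesis
    parameters, illustrating the train/evaluate/refine loop (not the
    Wekinator's real implementation)."""

    def __init__(self):
        self.examples = []  # list of (feature_vector, parameter_vector) pairs

    def add_example(self, features, params):
        """Record one demonstrated mapping from gesture features to params."""
        self.examples.append((list(features), list(params)))

    def train(self):
        # 1-NN needs no fitting step; a real learner would train here,
        # and in the on-the-fly paradigm this is called repeatedly.
        pass

    def map(self, features):
        """Return the parameters of the nearest stored example
        (squared Euclidean distance over the feature vectors)."""
        def sq_dist(example):
            return sum((a - b) ** 2 for a, b in zip(example[0], features))
        return min(self.examples, key=sq_dist)[1]


# Interactive workflow: demonstrate, train, evaluate, then refine.
learner = OnTheFlyLearner()
learner.add_example([0.0, 0.0], [220.0])  # gesture -> pitch (Hz)
learner.add_example([1.0, 1.0], [880.0])
learner.train()
print(learner.map([0.9, 0.8]))            # evaluate the current mapping

learner.add_example([0.5, 0.5], [440.0])  # dissatisfied? add an example
learner.train()                           # and retrain immediately
print(learner.map([0.4, 0.6]))
```

The point of the sketch is the loop structure, not the learner: examples, training, and evaluation all happen in one running session, so the designer can iterate without leaving the environment.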
