Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases

The recent increase in the accessibility and size of personal and crowdsourced digital sound collections has made them a valuable resource for music creation. Finding and retrieving relevant sounds during performance poses challenges that can be addressed with music information retrieval (MIR). In this paper, we explore the use of MIR to retrieve and repurpose sounds in musical live coding. We present a live coding system built on SuperCollider that enables the use of audio content from online Creative Commons (CC) sound databases, such as Freesound, as well as from personal sound databases. The novelty of our approach lies in exploiting high-level MIR methods (e.g., querying by pitch or rhythmic cues) through live coding techniques applied to sounds. We demonstrate its potential by reflecting on an illustrative case study and on feedback from four expert users. The users tried the system with either a personal or a crowdsourced database and reported on its potential for tailoring the tool to their own creative workflows.
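To make the kind of query mentioned above concrete, the sketch below shows a minimal content-based search against Freesound that retrieves candidate sounds whose estimated pitch lies near a target frequency and downloads their previews for playback. It is an illustration only, not the SuperCollider implementation presented in the paper: it assumes the third-party freesound-python client, a placeholder API key, and Freesound's content-search descriptor names and filter syntax, any of which may differ in practice.

```python
import freesound  # third-party client for the Freesound APIv2 (freesound-python)

# Hypothetical placeholder: a real key is obtained from the Freesound API pages.
API_KEY = "YOUR_FREESOUND_API_KEY"

client = freesound.FreesoundClient()
client.set_token(API_KEY, "token")

# Query by pitch: request sounds whose mean estimated pitch lies near A3 (~220 Hz).
# Descriptor name and range syntax follow Freesound's content-search filters and
# should be treated as assumptions that may vary across API versions.
results = client.content_based_search(
    descriptors_filter="lowlevel.pitch.mean:[215 TO 225]",
    fields="id,name,previews",
)

# Download the MP3 previews so they can be loaded into audio buffers
# (e.g., on a SuperCollider server) and repurposed during a live-coding session.
for sound in results:
    print(sound.id, sound.name)
    sound.retrieve_preview(".", "{0}.mp3".format(sound.id))
```

The system described in the paper exposes this kind of pitch- or rhythm-based query through live coding abstractions in SuperCollider, so that retrieved sounds can be repurposed immediately in performance.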
