Jamming with a Smart Mandolin and Freesound-based Accompaniment

This paper presents an Internet of Musical Things ecosystem in which musicians and audiences interact with a smart mandolin, smartphones, and Freesound, the Audio Commons online sound repository. The ecosystem has been devised to support performer-instrument and performer-audience interactions through the generation of musical accompaniments built from crowd-sourced sounds. We present two use cases investigating how audio content retrieved from Freesound can be leveraged by performers or audiences to produce accompanying soundtracks for music performance with a smart mandolin. In the performer-instrument interaction use case, the performer selects the content to be retrieved prior to the performance via a set of keywords and structures it to create the desired accompaniment. In the performer-audience interaction use case, a group of audience members participates in the music creation by collaboratively selecting and arranging Freesound audio content into an accompaniment. We discuss the advantages and limitations of the system with regard to music making and audience participation, along with its implications and challenges.
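The keyword-based retrieval step can be illustrated with a minimal sketch against the public Freesound APIv2 text-search endpoint. The API key placeholder, the example keyword list, and the way results would later be arranged into an accompaniment are assumptions for illustration; only the /apiv2/search/text/ endpoint and its query, fields, page_size, and token parameters come from the Freesound API documentation.

```python
# Minimal sketch of keyword-based sound retrieval from Freesound (APIv2).
# Assumed: FREESOUND_API_KEY and KEYWORDS are placeholders; arranging the
# retrieved clips into an accompaniment is left to the performance system.
import requests

FREESOUND_API_KEY = "YOUR_API_KEY"          # assumed: a valid Freesound API token
KEYWORDS = ["rain", "crowd", "drone"]       # assumed: keywords chosen by performer or audience


def search_freesound(keyword, max_results=5):
    """Return (name, preview URL) pairs for sounds matching a keyword."""
    response = requests.get(
        "https://freesound.org/apiv2/search/text/",
        params={
            "query": keyword,
            "fields": "name,previews",      # request only the fields this sketch needs
            "page_size": max_results,
            "token": FREESOUND_API_KEY,
        },
        timeout=10,
    )
    response.raise_for_status()
    return [
        (result["name"], result["previews"]["preview-lq-mp3"])
        for result in response.json().get("results", [])
    ]


if __name__ == "__main__":
    # Gather candidate sounds per keyword; an accompaniment engine would then
    # order, loop, and mix the corresponding clips according to the chosen structure.
    for keyword in KEYWORDS:
        for name, url in search_freesound(keyword):
            print(f"{keyword}: {name} -> {url}")
```

In both use cases described above, such keyword queries would supply the pool of candidate sounds; the difference lies in who issues them (the performer in advance, or audience members collaboratively during the performance) and how the retrieved material is subsequently structured.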
