User Requirements for Live Sound Visualization System Using Multitrack Audio

In this paper, we identify design requirements for a screen-based system that enables live sound visualization using multitrack audio. Our mixed methodology is grounded in user-centered design and comprises a literature review assessing the state of the art of Video Jockeying (VJing) and two online surveys canvassing practices within the audiovisual community to gain practical and aspirational insight into the subject. We review ten studies on VJ practice and culture and on human-computer interaction topics within live performance. Responses to the first survey, completed by 22 participants, were analysed to identify general practices, mapping preferences, and impressions of multitrack audio and audio-content feature extraction. A second, complementary survey was designed to probe specific implications of performing with a system that facilitates live visual performance using multitrack audio. Analysis of 29 participants' self-reports highlights that the creation of audiovisual content is a multivariate and subjective process, and helps define where multitrack audio, audio-content extraction, and live mapping could fit within it. We analyse the findings and discuss how they can inform the design of our system.
