Data Musicalization

Data musicalization is the process of automatically composing music from given data, as an approach to perceptualizing information artistically. The aim of data musicalization is to evoke subjective experiences of the information rather than merely to convey it objectively and unemotionally. This article is written as a tutorial for readers interested in data musicalization. We begin by providing a systematic characterization of musicalization approaches, based on their inputs, methods, and outputs. We then illustrate data musicalization techniques with examples from several applications: one that perceptualizes physical sleep data as music, several that compose music artistically inspired by sleep data, one that musicalizes online chat conversations to perceptualize the liveliness of a discussion, and one that uses musicalization in a game-like mobile application that lets its users produce music. We additionally provide a number of electronic samples of music produced by the different musicalization applications.
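To make the core idea concrete, the essence of many musicalization pipelines is a mapping from a numeric data series to musical events. The sketch below is a minimal, hypothetical illustration (not any of the applications described in the article): it rescales a data series, such as heart-rate samples from sleep measurements, onto a pentatonic scale and emits MIDI note numbers, which a synthesizer could then render as a melody.

```python
# Minimal data-to-melody sketch (hypothetical mapping, for illustration only):
# rescale each data value onto a pentatonic scale and emit MIDI note numbers.

PENTATONIC = [0, 2, 4, 7, 9]  # C major pentatonic, as semitone offsets

def musicalize(values, base_note=60, octaves=2):
    """Map each numeric value to a MIDI note in a pentatonic scale.

    base_note=60 is middle C; higher data values map to higher pitches.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1          # avoid division by zero for flat data
    degrees = len(PENTATONIC) * octaves
    notes = []
    for v in values:
        step = int((v - lo) / span * (degrees - 1))
        octave, degree = divmod(step, len(PENTATONIC))
        notes.append(base_note + 12 * octave + PENTATONIC[degree])
    return notes

# Example: a short run of (made-up) heart-rate samples becomes a rising
# and falling melodic contour.
heart_rate = [58, 60, 65, 72, 70, 64, 59]
print(musicalize(heart_rate))  # → [60, 62, 69, 81, 76, 67, 60]
```

Restricting the output to a pentatonic scale is a common design choice in sonification-for-aesthetics settings, since any combination of its notes avoids harsh dissonance; the article's applications use considerably richer compositional methods on top of such mappings.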
