Originally formed under the auspices of music information retrieval, the field, now perhaps better described as music informatics research (MIR), continues to evolve in an effort to better understand and manipulate information related to the phenomena we know as music. Because music signals are among the most readily available data sources, much effort has been invested in developing systems that extract high-level information from them, referred to here as content-based methods. However, once this information is obtained, what else could it be used for beyond retrieval? To this end, a breakout session was convened to assess the current research climate and the state of the art in creative, or constructive, topics in music informatics research.

In hindsight, it is understandable why the earliest efforts in content-based MIR placed a heavy emphasis on retrieval-centric problems. Within a media ecosystem, an agent may assume three fundamental roles (creator, distributor, and consumer), and the degree to which one partakes in each exists on a continuum. Importantly, the roles most affected by the mass embrace of the Internet and the advent of personal media players were those of distributor and consumer. The combination of these two spawned a challenge never before faced by the music industry: how does one make sense of more music than could ever be listened to in a single lifetime? As a result, the research trajectory of the field was significantly shaped by the technological advances, and fueled by the user expectations, of the 20th century.

While creativity-oriented music technology is not a new research topic, there is mounting interest in coalescing a more focused effort on creative topics and applications in MIR. Though individual rationales vary widely, there are several notable reasons why it is critical to start this conversation now.
First, society continues to progress toward a reality in which anyone, anywhere can create multimedia content, referred to at present as user-generated content. Not only can music now be recorded at scale, but its production and distribution can no longer be controlled by gatekeepers, a responsibility formerly upheld by major record labels. Second, “digital natives,”