The affective remixer: personalized music arranging

This paper describes a real-time music-arranging system that reacts to immediate affective cues from a listener. To measure the potential of certain musical dimensions to elicit change in a listener's affective state, data were collected using sound files created explicitly for the experiment through composition/production, segmentation, and re-assembly of music along these dimensions. Based on the listener data, a probabilistic state-transition model was developed to infer the listener's current affective state; a second model selects music segments and re-arranges ('re-mixes') them to induce a target affective state. We propose that this approach provides a new perspective for characterizing musical preference.
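The abstract names two components: a probabilistic state-transition model that infers the listener's affective state, and a selection model that re-mixes segments toward a target state. A minimal sketch of the second idea follows; the affect states, the "tempo" segment feature, and all transition probabilities here are illustrative assumptions, not the authors' actual model.

```python
# Hedged sketch: greedy segment selection over an assumed state-transition
# table P(next_state | current_state, segment_feature). The states, the
# features, and the probabilities are hypothetical placeholders.

# Assumed transition probabilities for a two-state arousal model.
TRANSITIONS = {
    ("low_arousal", "fast_tempo"): {"low_arousal": 0.3, "high_arousal": 0.7},
    ("low_arousal", "slow_tempo"): {"low_arousal": 0.8, "high_arousal": 0.2},
    ("high_arousal", "fast_tempo"): {"low_arousal": 0.2, "high_arousal": 0.8},
    ("high_arousal", "slow_tempo"): {"low_arousal": 0.6, "high_arousal": 0.4},
}


def select_segment(current_state, target_state):
    """Pick the segment feature most likely to move the listener
    from current_state toward target_state, and return its probability."""
    best_feature, best_p = None, -1.0
    for (state, feature), dist in TRANSITIONS.items():
        if state != current_state:
            continue
        p = dist[target_state]
        if p > best_p:
            best_feature, best_p = feature, p
    return best_feature, best_p


if __name__ == "__main__":
    feature, p = select_segment("low_arousal", "high_arousal")
    print(feature, p)  # -> fast_tempo 0.7
```

In a real-time loop, the inferred current state would be updated from the listener's affective cues after each segment, and the selection step repeated, which is what makes the arrangement reactive rather than fixed.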
