Computational Modeling of Induced Emotion Using GEMS

Most research in automatic music emotion recognition focuses on the two-dimensional valence and arousal model. However, this model does not account for the full diversity of emotions expressible through music. Moreover, in many cases it may be important to model induced (felt) emotion rather than perceived emotion. In this paper we explore a multidimensional emotional space, the Geneva Emotional Music Scales (GEMS), which addresses both of these issues. We collected the data for our study using a game with a purpose. We extract a comprehensive set of features from several state-of-the-art toolboxes, propose a new set of harmonically motivated features, and compare the performance of these feature sets. Additionally, we use expert human annotations to explore the relationship between musicologically meaningful characteristics of music and the emotional categories of GEMS, demonstrating the need for algorithms that can better approximate human perception.
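The abstract does not specify the exact pipeline, so the following is only a minimal sketch of the kind of setup it describes: extracting harmonically motivated audio features and regressing a GEMS category rating from them. It assumes librosa for feature extraction and scikit-learn for regression; the choice of chroma and tonal-centroid (tonnetz) statistics is one plausible reading of "harmonically motivated features", not necessarily the feature set proposed in the paper, and the file names and `gems_ratings` values are hypothetical placeholders.

```python
# Sketch only: harmonically motivated features + regression of one
# hypothetical GEMS dimension. Not the paper's actual feature set.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestRegressor

def harmonic_features(path):
    """Summarize chroma and tonal-centroid (tonnetz) statistics for one clip."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    y_harm = librosa.effects.harmonic(y)            # isolate harmonic component
    chroma = librosa.feature.chroma_cqt(y=y_harm, sr=sr)
    tonnetz = librosa.feature.tonnetz(y=y_harm, sr=sr)
    return np.concatenate([
        chroma.mean(axis=1), chroma.std(axis=1),    # 12 + 12 dims
        tonnetz.mean(axis=1), tonnetz.std(axis=1),  # 6 + 6 dims
    ])

# Hypothetical corpus: audio clips with mean listener ratings in [0, 1]
# for a single GEMS category, collected e.g. via a game with a purpose.
clips = ["clip_001.mp3", "clip_002.mp3"]  # placeholder file names
gems_ratings = np.array([0.7, 0.2])       # placeholder annotations

X = np.vstack([harmonic_features(p) for p in clips])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, gems_ratings)                # in practice: cross-validate
print(model.predict(X[:1]))
```

In practice one model per GEMS category (or a multi-output regressor) would be trained, and performance of competing feature sets compared under cross-validation.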
