Multilabel Automated Recognition of Emotions Induced Through Music

Advancing the automatic recognition of emotions that music can induce requires accounting for the multiplicity and simultaneity of emotions. The core of our work is a comparison of machine learning algorithms performing multilabel and multiclass classification. The study analyzes the implementation of the Geneva Emotional Music Scale 9 (GEMS-9) in the Emotify music dataset and the resulting data distribution. The research goal is to identify the best methods for defining the audio component of a new multimodal dataset for music emotion recognition.
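The multilabel-versus-multiclass comparison described above can be sketched as follows. This is a minimal illustration with scikit-learn, not the study's actual pipeline: the feature matrix is synthetic stand-in data (the real work would use audio features extracted from Emotify excerpts), and the nine binary label columns merely mimic GEMS-9 style annotations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
# Hypothetical stand-in for audio features: 400 excerpts x 20 features.
X = rng.normal(size=(400, 20))
# GEMS-9 style annotations: 9 binary emotion labels, several may co-occur,
# reflecting the multiplicity and simultaneity of induced emotions.
Y = ((X @ rng.normal(size=(20, 9))
      + rng.normal(scale=0.5, size=(400, 9))) > 0).astype(int)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Multilabel setting: one-vs-rest trains a binary classifier per emotion,
# so each excerpt can receive any subset of the nine labels.
ml = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
pred_ml = ml.predict(X_te)
print("multilabel Hamming loss:", hamming_loss(Y_te, pred_ml))

# Multiclass baseline: collapse each excerpt to a single dominant label,
# discarding co-occurring emotions -- the simplification the study questions.
y_tr, y_te = Y_tr.argmax(axis=1), Y_te.argmax(axis=1)
mc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("multiclass accuracy:", accuracy_score(y_te, mc.predict(X_te)))
```

The contrast in evaluation metrics (Hamming loss over label subsets versus single-label accuracy) is itself part of why the two formulations are not directly interchangeable.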
