From Sounds to Music and Emotions

The constant growth of online music datasets and applications has driven advances in MIR research. Music genres and annotated moods have received much attention in recent decades as descriptors for content-based systems. However, their inherent relationship is rarely explored in the literature. Here, we investigate whether the presence of tonal and rhythmic motifs in the melody can be used to establish a relationship between genres and subjective aspects such as mood, dynamism, and emotion. Our approach uses a symbolic representation of music and is applied to eight different genres.
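To make the idea of tonal and rhythmic motifs in a symbolic melody concrete, the sketch below is a minimal, illustrative example rather than the paper's exact feature set: it assumes a melody given as a list of (MIDI pitch, duration in beats) pairs and models tonal motifs as n-grams of pitch intervals and rhythmic motifs as n-grams of successive duration ratios. The function names and motif definitions are our assumptions for illustration only.

```python
# Illustrative sketch (assumed representation, not the paper's method):
# extract tonal and rhythmic motifs from a symbolic melody encoded as
# (MIDI pitch, duration in beats) pairs.
from collections import Counter
from fractions import Fraction

Melody = list[tuple[int, float]]  # (MIDI pitch, duration in beats)

def tonal_motifs(melody: Melody, n: int = 3) -> Counter:
    """Count n-grams of successive pitch intervals (in semitones)."""
    pitches = [p for p, _ in melody]
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    return Counter(tuple(intervals[i:i + n])
                   for i in range(len(intervals) - n + 1))

def rhythmic_motifs(melody: Melody, n: int = 3) -> Counter:
    """Count n-grams of successive duration ratios (tempo-invariant rhythm)."""
    durations = [d for _, d in melody]
    ratios = [Fraction(b).limit_denominator(16) / Fraction(a).limit_denominator(16)
              for a, b in zip(durations, durations[1:])]
    return Counter(tuple(ratios[i:i + n])
                   for i in range(len(ratios) - n + 1))

if __name__ == "__main__":
    # Toy melody: opening of "Frere Jacques" as (pitch, beats) pairs.
    melody = [(60, 1), (62, 1), (64, 1), (60, 1),
              (60, 1), (62, 1), (64, 1), (60, 1)]
    print(tonal_motifs(melody).most_common(3))
    print(rhythmic_motifs(melody).most_common(3))
```

The resulting motif counts could then serve as features for relating genre labels to mood or emotion annotations, e.g., as input to a classifier.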
