Analysing Emotions in Schubert's Erlkönig: A Computational Approach

This article outlines the computational prediction of emotions expressed by music. A recent model, based on acoustic features extracted from film music, is used to demonstrate each phase of model construction, and the musical relevance and predictive accuracy of such models are discussed. This model, together with a stylistically more appropriate model based on piano performances, is applied to a segmented analysis of Schubert's Lied Der Erlkönig. The predictions are compared with the analytical insights provided by Spitzer in this issue. The two approaches, music-analytical interpretation and computational analysis of expressed emotions, often yield coinciding emotional nuances, particularly when the stylistically appropriate model based on piano performances is used. Potential synergies are discussed, especially the more comprehensive analysis of timbral character afforded by computational tools, alongside the more refined interpretations that only a music analyst can draw.
