Cross-cultural Mood Perception in Pop Songs and its Alignment with Mood Detection Algorithms

Do people from different cultural backgrounds perceive the mood in music the same way? How closely do human ratings from different cultures align with automatic mood detection algorithms, which are often trained on corpora of predominantly Western popular music? Analyzing 166 participants’ responses from Brazil, South Korea, and the US, we examined the similarity between ratings of nine categories of perceived mood in music and estimated their alignment with four popular mood detection algorithms. We created a dataset of 360 recent pop songs drawn from the major music charts of these countries and constructed semantically identical mood descriptors in English, Korean, and Portuguese. Multiple participants from the three countries rated their familiarity with, preference for, and perceived moods of a given song. Ratings were highly similar within and across cultures for basic mood attributes such as sad, cheerful, and energetic. However, we found significant cross-cultural differences for more complex characteristics such as dreamy and love. To our surprise, the outputs of the mood detection algorithms were uniformly correlated with human ratings from all three countries and showed no detectable bias towards any particular culture. Our study thus suggests that mood detection algorithms can serve as an objective measure, at least within the popular music context.
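The alignment analysis described above amounts to correlating each country's per-song mood ratings with an algorithm's scores and comparing the coefficients across countries. A minimal sketch of that comparison, using rank (Spearman) correlation and entirely made-up illustrative values (none of these numbers come from the study's data):

```python
def spearman(x, y):
    """Spearman rank correlation (no-ties formula): rho = 1 - 6*sum(d^2)/(n(n^2-1))."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical per-song mean "sad" ratings from listeners in each country.
ratings = {
    "US": [0.8, 0.2, 0.6, 0.1, 0.9, 0.4],
    "KR": [0.6, 0.4, 0.5, 0.2, 0.9, 0.3],
    "BR": [0.9, 0.1, 0.7, 0.2, 0.8, 0.3],
}
# Hypothetical scores from one mood detection algorithm for the same six songs.
algo = [0.75, 0.25, 0.55, 0.15, 0.85, 0.45]

# Similar coefficients across countries would suggest no cultural bias
# in the algorithm's output.
rhos = {country: spearman(r, algo) for country, r in ratings.items()}
for country, rho in rhos.items():
    print(f"{country}: rho={rho:.2f}")
```

In the study itself, whether such coefficients differ significantly between countries would be tested with a dedicated procedure for comparing correlations (e.g. the cocor package cited by the authors); this sketch only illustrates the shape of the comparison.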
