Quantitative Study of Music Listening Behavior in a Social and Affective Context

A scientific understanding of emotional experience requires information about the contexts in which the emotion is induced. Moreover, because one of the primary functions of music is to regulate the listener's mood, an individual's short-term music preference may reveal his or her emotional state. In light of these observations, this paper presents the first scientific study that exploits an online repository of social data to investigate the connections between a blogger's emotional state, the user context manifested in the blog article, and the content of the music titles the blogger attached to the post. A number of computational models are developed to evaluate the accuracy of different content and context cues in predicting emotional state, using 40,000 music listening records collected from the social blogging website LiveJournal. Our study shows that it is feasible to computationally model the latent structure underlying music listening and mood regulation: the average area under the receiver operating characteristic curve (AUC) reaches 0.5462 for the content-based models and 0.6851 for the context-based models. Associations among user mood, music emotion, and the listener's personality are also identified.
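The reported figures are averages of per-model AUC scores. As a hedged illustration (the labels and scores below are hypothetical, not data from the study), AUC can be computed directly from its pairwise-ranking definition: the probability that a randomly chosen positive example is scored above a randomly chosen negative one.

```python
def auc(y_true, y_score):
    """AUC via the pairwise-ranking identity: the fraction of
    (positive, negative) pairs in which the positive example
    receives the higher score (ties count as half a win)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical binary mood labels (1 = target mood) and classifier scores.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.3, 0.6, 0.8, 0.4, 0.65, 0.7, 0.2]
print(auc(y_true, y_score))  # one misordered pair out of 16 -> 0.9375
```

A score of 0.5 corresponds to chance-level ranking, which makes the content-based models' 0.5462 only marginally better than random, while the context-based models' 0.6851 reflects a substantially stronger signal.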
