Affective Characterization of Movie Scenes Based on Content Analysis and Physiological Changes

In this paper, we propose an approach for the affective characterization of movie scenes based on the emotions actually felt by spectators. Such a representation can be used to characterize the emotional content of video clips for applications such as affective video indexing and retrieval, and for neuromarketing studies. A dataset of 64 scenes from eight movies was shown to eight participants. While they watched these scenes, their physiological responses were recorded, and they were asked to self-assess the emotional arousal and valence they felt for each scene. In addition, content-based audio and video features were extracted from the movie scenes to characterize each scene. Degrees of arousal and valence were then estimated by a linear combination of features derived from the physiological signals, as well as by a linear combination of the content-based features. We show that a significant correlation exists between the arousal and valence provided by the spectators' self-assessments and the affective grades obtained automatically from either the physiological responses or the audio-video features. An analysis of variance (ANOVA) showed that the variation in self-assessments across participants and across gender groups was significant for both valence and arousal (p-values lower than 0.005). These results demonstrate the feasibility of using multimedia features and physiological responses to predict a spectator's expected affective response to emotional video content.
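
To make the estimation step concrete, the following is a minimal sketch (not the authors' code) of how per-scene arousal or valence could be predicted as a linear combination of feature values and then compared against self-assessments via Pearson correlation. The feature count, feature names, and the random placeholder data are assumptions for illustration only; the same form applies whether the rows hold physiological features or content-based audio-video features.

```python
# Sketch only: linear (least-squares) estimation of felt arousal/valence from
# per-scene features, plus correlation with self-assessments. All data below
# are random placeholders; feature choices are hypothetical.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

n_scenes = 64      # 64 scenes from eight movies, as in the study
n_features = 5     # e.g. GSR level, heart rate, sound energy, motion activity, shot length (hypothetical)

# X: one feature vector per scene; y: self-assessed arousal (or valence) per scene.
X = rng.normal(size=(n_scenes, n_features))
y = rng.normal(size=n_scenes)

# Linear combination of features fitted by ordinary least squares (with intercept).
A = np.column_stack([X, np.ones(n_scenes)])
weights, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ weights                     # estimated affective grade per scene

# Agreement between the automatic grades and the self-assessments.
r, p_value = pearsonr(y, y_hat)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
```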
