LIRIS-ACCEDE: A Video Database for Affective Content Analysis

Research in affective computing requires ground-truth data for training and benchmarking computational models of machine-based emotion understanding. In this paper, we propose a large video database, LIRIS-ACCEDE, for affective content analysis and related applications, including video indexing, summarization, and browsing. In contrast to existing datasets, which contain few video resources and are of limited accessibility due to copyright constraints, LIRIS-ACCEDE consists of 9,800 good-quality video excerpts covering a large diversity of content. All excerpts are shared under Creative Commons licenses and can thus be freely distributed without copyright issues. Affective annotations were collected by crowdsourcing through a pairwise video comparison protocol, ensuring consistent annotations, as attested by high inter-annotator agreement despite the large diversity of the raters' cultural backgrounds. In addition, to enable fair comparison and to benchmark the progress of future affective computational models, we provide four experimental protocols and a baseline for emotion prediction using a large set of both visual and audio features. The dataset (the video clips, annotations, features, and protocols) is publicly available at http://liris-accede.ec-lyon.fr/.
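
The abstract attributes the consistency of the crowdsourced annotations to the forced-choice pairwise comparison protocol and reports high inter-annotator agreement. As a minimal sketch of how such agreement can be quantified for binary forced-choice labels, the snippet below computes Randolph's free-marginal multirater kappa, a chance-corrected statistic suited to this setting; the toy vote data and the function itself are illustrative assumptions, not the authors' evaluation code.

```python
from collections import Counter

def randolph_kappa(annotations, n_categories=2):
    """Free-marginal multirater kappa (Randolph, 2005).

    `annotations` holds one list of rater labels per video pair, e.g.
    0 = "first excerpt induces more arousal", 1 = "second excerpt does".
    """
    per_item = []
    for labels in annotations:
        n = len(labels)
        if n < 2:
            continue  # agreement is undefined with a single rater
        counts = Counter(labels)
        # fraction of rater pairs that agree on this item
        per_item.append(sum(c * (c - 1) for c in counts.values()) / (n * (n - 1)))
    p_o = sum(per_item) / len(per_item)   # observed agreement
    p_e = 1.0 / n_categories              # chance agreement (free-marginal)
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical toy data: 4 video pairs, 3 crowd raters each.
votes = [[0, 0, 0], [1, 1, 0], [1, 1, 1], [0, 0, 1]]
print(f"kappa_free = {randolph_kappa(votes):.3f}")  # 0.333 on this toy data
```

A value of 0 indicates chance-level agreement and 1 perfect agreement; the free-marginal variant is appropriate here because raters are not constrained to use each answer a fixed number of times.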
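
The abstract also announces a prediction baseline built on a large set of visual and audio features, without detailing the model at this point. Purely as an illustrative sketch (the synthetic data, the choice of a support-vector regressor, and the valence target are assumptions, not the published pipeline), a typical baseline over such pre-extracted features could look like this:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Stand-ins for pre-extracted audio-visual features (one row per excerpt)
# and continuous valence scores; the real dataset ships its own files.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.uniform(-1.0, 1.0, size=200)

# Audio-visual features live on very different scales (loudness,
# colorfulness, cut rate, ...), so standardize before the kernel method.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
print(f"5-fold MSE: {-scores.mean():.3f}")
```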
