A Protocol for Cross-Validating Large Crowdsourced Data: The Case of the LIRIS-ACCEDE Affective Video Dataset

Recently, we released a large affective video dataset, LIRIS-ACCEDE, which was annotated through crowdsourcing along the induced valence and arousal axes using pairwise comparisons. In this paper, we design an annotation protocol for scoring induced affective feelings in order to cross-validate the annotations of the LIRIS-ACCEDE dataset and identify any potential bias. We collected, in a controlled setup, ratings from 28 users on a subset of video clips carefully selected from the dataset by computing inter-observer reliabilities on the crowdsourced data. In contrast to the crowdsourced rankings gathered in unconstrained environments, users were asked to rate each video using the Self-Assessment Manikin tool. The significant correlation between the crowdsourced rankings and the controlled ratings confirms the reliability of the dataset for future use in affective video analysis and paves the way for the automatic generation of ratings over the whole dataset.
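The cross-validation step boils down to checking whether the ordering induced by the crowdsourced pairwise comparisons agrees with the ordering of the mean SAM ratings from the controlled experiment. The sketch below illustrates this with a rank correlation; it is not the authors' code, and the file names and column names (crowd_ranks.csv, sam_ratings.csv, clip_id, crowd_rank, sam_rating) are hypothetical.

```python
# Minimal sketch, assuming hypothetical CSV inputs: one file with the
# crowdsourced rank per clip, one with per-participant SAM ratings (1-9).
import pandas as pd
from scipy.stats import spearmanr

ranks = pd.read_csv("crowd_ranks.csv")      # columns: clip_id, crowd_rank
ratings = pd.read_csv("sam_ratings.csv")    # columns: participant, clip_id, sam_rating

# Average the controlled SAM ratings over participants for each clip.
mean_ratings = ratings.groupby("clip_id")["sam_rating"].mean()

# Align both sources on clip_id and compute Spearman's rank correlation,
# which compares orderings rather than absolute rating scales.
merged = ranks.set_index("clip_id").join(mean_ratings, how="inner")
rho, p_value = spearmanr(merged["crowd_rank"], merged["sam_rating"])
print(f"Spearman rho = {rho:.3f} (p = {p_value:.4f})")
```

A significant rho computed this way, separately for valence and arousal, is the kind of evidence the paper uses to argue that the crowdsourced rankings are trustworthy.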
