From crowdsourced rankings to affective ratings

Automatic prediction of emotions requires reliably annotated data, which can be obtained through either absolute scoring or pairwise ranking. But can an emotional score be predicted from a ranking-based annotation approach? In this paper, we answer this question by describing a regression analysis that maps crowdsourced rankings into affective scores in the induced valence-arousal emotional space. This process takes advantage of Gaussian Processes for regression, which can account for the variance of the ratings and thus the subjectivity of emotions. The regression models successfully fit the input data and provide valid predictions. Two distinct experiments were conducted on a small subset of the publicly available LIRIS-ACCEDE affective video database, for which both crowdsourced ranks and affective ratings are available for arousal and valence. This makes it possible to enrich LIRIS-ACCEDE with absolute video ratings for the whole database, in addition to the video rankings that are already available.
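The rank-to-score mapping described above can be sketched with Gaussian Process regression. The sketch below is a minimal, hypothetical illustration using scikit-learn: the rank and score values are synthetic, not LIRIS-ACCEDE data, and the kernel choice is an assumption; it only shows how a GP can map ranks to affective scores while exposing predictive variance, which reflects rater subjectivity.

```python
# Hypothetical sketch: mapping crowdsourced ranks to affective scores with
# Gaussian Process regression. Data below is synthetic, not LIRIS-ACCEDE.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic training data: ranks of 40 videos (input) and valence scores
# on a 1-9 scale (target), perturbed to mimic annotator noise.
ranks = np.arange(40, dtype=float).reshape(-1, 1)
scores = 1.0 + 8.0 * ranks.ravel() / 39.0 + rng.normal(0.0, 0.3, 40)

# RBF models the smooth rank-to-score trend; WhiteKernel models per-rating
# noise, i.e. the variance attributable to subjective ratings.
kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(ranks, scores)

# Predict an affective score and its predictive uncertainty for a new rank.
mean, std = gp.predict(np.array([[20.0]]), return_std=True)
print(f"predicted score: {mean[0]:.2f} +/- {std[0]:.2f}")
```

The predictive standard deviation returned alongside each score is what makes GPs attractive here: uncertain regions of the ranking (e.g. sparsely compared videos) naturally receive wider error bars.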
