Investigating the Evaluative Dimensions of a Large Set of Communicative Facial Expressions: A Comparison of Lab-Based and Crowd-Sourced Data Collection

Facial expressions form one of the most important non-verbal communication channels. Although humans are capable of producing a wide range of facial expressions, research in psychology has almost exclusively focused on the so-called basic emotional expressions (anger, disgust, fear, happiness, sadness, and surprise). Research into the full range of communicative expressions, however, may be prohibitively costly due to the large number of stimuli required for testing. Here, we conducted both a lab-based and an online, crowd-sourced study in which participants rated videos of communicative facial expressions according to 13 evaluative dimensions (arousal, audience, dominance, dynamics, empathy, familiarity, masculinity, naturalness, persuasiveness, politeness, predictability, sincerity, and valence). Twenty-seven different facial expressions displayed by six actors were selected from the KU Facial Expression Database (Shin et al., 2012) as stimuli. For the lab-based experiment, 20 participants rated all 162 video stimuli in randomized order. The crowd-sourced experiment was run on Amazon Mechanical Turk with 423 participants, selected so as to gather a total of 20 ratings per stimulus. Within-group reliability was high for both groups (r_Lab = .772, r_MTurk = .727, averaged across the 13 dimensions): valence, arousal, politeness, and dynamics were highly reliable measures (r > .8), whereas masculinity, predictability, and naturalness were comparatively less reliable (.3 < r < .6). Importantly, across-group correlations showed a highly similar pattern. Our results show, first, that it is feasible to conduct large-scale stimulus-rating experiments using crowdsourcing. Additionally, the ratings paint a complex picture of how facial expressions are evaluated. Future studies will use dimensionality analyses to further investigate the full space of human communicative expressions.
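
The reliability analysis can be illustrated schematically. The abstract does not state which estimator was used; the sketch below assumes within-group reliability is the mean pairwise inter-rater Pearson correlation per dimension, and across-group agreement is the Pearson correlation between the two groups' mean stimulus ratings. All data in the snippet are synthetic placeholders, not the study's actual ratings or analysis code.

```python
# Minimal sketch of the reliability computation, under the assumptions above.
import numpy as np

N_STIMULI, N_RATERS, N_DIMS = 162, 20, 13  # from the study design
rng = np.random.default_rng(0)

def simulate_group():
    """Synthetic ratings: shared per-stimulus signal plus rater noise.
    Returns an array of shape (N_DIMS, N_STIMULI, N_RATERS)."""
    true_scores = rng.normal(size=(N_DIMS, N_STIMULI, 1))
    noise = rng.normal(scale=0.8, size=(N_DIMS, N_STIMULI, N_RATERS))
    return true_scores + noise

lab, mturk = simulate_group(), simulate_group()

def within_group_reliability(group):
    """Mean pairwise Pearson r between raters, computed per dimension."""
    rel = []
    for dim in group:                                  # (N_STIMULI, N_RATERS)
        corr = np.corrcoef(dim.T)                      # rater-by-rater matrix
        upper = corr[np.triu_indices_from(corr, k=1)]  # unique rater pairs
        rel.append(upper.mean())
    return np.array(rel)

def across_group_correlation(g1, g2):
    """Pearson r between the two groups' mean stimulus ratings, per dimension."""
    return np.array([np.corrcoef(d1.mean(axis=1), d2.mean(axis=1))[0, 1]
                     for d1, d2 in zip(g1, g2)])

r_lab = within_group_reliability(lab)
r_mturk = within_group_reliability(mturk)
r_across = across_group_correlation(lab, mturk)
print(f"r_Lab   = {r_lab.mean():.3f} (mean over {N_DIMS} dimensions)")
print(f"r_MTurk = {r_mturk.mean():.3f}")
print(f"mean across-group r = {r_across.mean():.3f}")
```

With real data, the per-dimension values of `r_lab` and `r_mturk` would correspond to the reported reliabilities (e.g., r > .8 for valence and arousal), and `r_across` to the across-group pattern.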