In and out of context: Influences of facial expression and context information on emotion attributions

Studies intended to evaluate the relative importance of person information (facial expression) and context information (the situation evoking an emotion) in determining observers' emotion attributions have so far, with few exceptions, used static stimulus presentation (photographs and/or verbal descriptions). Here an attempt is made to study emotion inference processes using dynamic stimulus presentation (video). Sixty judges viewed material from 60 video clips, each depicting an emotion-arousing situation followed by the emotional facial expression of an actor or actress reacting to that situation. The clips were selected from films and TV shows. Three groups of 20 judges each watched either (i) only the first takes of the clips, depicting the emotion-arousing situation, (ii) only the second takes, presenting the emotional facial expressions, or (iii) both takes in combination, the emotion-arousing situation followed by the facial expression. Their task was to judge the emotion(s) expressed by the person in the given situation. Results indicate that context information dominated person information in determining emotion attributions. Furthermore, differences were found with respect to the relative discrepancy or consonance of the clips (situation information was more important to judges for discrepant than for consonant clips) and with respect to the gender of the actors (when watching actresses, person information dominated judgements; when watching actors, situation information was dominant). Results are discussed with respect to possible differences between static and dynamic stimulus presentation and with respect to the general process of inferring emotions.