Introducing the Geneva Multimodal Emotion Portrayal (GEMEP) corpus

In this chapter we outline the requirements for a systematic corpus of actor portrayals and describe the development, recording, editing, and validation of a major new corpus, the Geneva Multimodal Emotion Portrayal (GEMEP) corpus. This corpus consists of more than 7,000 audio-video emotion portrayals representing 18 emotions (including rarely studied subtle emotions), portrayed by 10 professional actors who were coached by a professional director. The portrayals were recorded with optimal digital quality in multiple modalities, using both pseudo-linguistic utterances and affect bursts. In addition, the corpus includes stimuli with systematically varied intensity levels, as well as instances of masked expressions. From the total corpus, 1,260 portrayals were selected and submitted to a first rating procedure in different modalities to establish validity in terms of inter-judge reliability and recognition accuracy. The results show that the portrayed expressions are recognized by lay judges with an accuracy level that, for all emotions, largely exceeds chance and compares very favorably with published tests of emotion recognition that use highly selected stimulus sets. The portrayals also reach very satisfactory levels of inter-rater reliability for category judgments and for ratings of believability and intensity. The validity of the corpus is further confirmed by replicating results from earlier work on the role of expression modality and the corresponding communication channel in cue utilization during emotion recognition. We show that, as expected, the highest accuracy is achieved when both auditory and visual information (voice, face, and gestures) is available, but that sizeable accuracy is achieved even when only one modality is available. The video modality is slightly superior to the audio modality, probably reflecting the fact that facial and gestural cues are more discrete and iconic than vocal cues. However, there are important interactions between emotion and modality, as particular emotions seem to be preferentially communicated by visual or audio cues. The results also raise important issues concerning the relationships between emotion-specific expression patterns and the modalities through which they are most effectively communicated.
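The accuracy levels mentioned above depend on how many response alternatives the judges were offered; with 18 emotion categories, random guessing yields only about 5.6% correct, so even moderate raw hit rates lie far above chance. As a rough illustration of how such figures can be compared across tests that differ in the number of alternatives, the following Python sketch computes the chance level and Rosenthal and Rubin's proportion index for a hypothetical hit count; the counts and the 18-alternative setup in the example are illustrative assumptions, not GEMEP results.

```python
# Minimal sketch (illustrative, not the chapter's analysis): comparing raw
# recognition accuracy against chance in a forced-choice judgment task, and
# converting it to Rosenthal & Rubin's proportion index (pi) so that
# accuracies obtained with different numbers of response alternatives can be
# placed on a common 2-alternative scale.

def chance_level(n_categories: int) -> float:
    """Expected accuracy from random guessing over equiprobable alternatives."""
    return 1.0 / n_categories

def proportion_index(accuracy: float, n_categories: int) -> float:
    """Rosenthal & Rubin's pi: maps k-alternative accuracy onto a
    2-alternative scale, where 0.5 corresponds to chance and 1.0 to
    perfect recognition."""
    k = n_categories
    return accuracy * (k - 1) / (1 + accuracy * (k - 2))

if __name__ == "__main__":
    k = 18                   # emotion categories offered to the judges
    hits, trials = 33, 70    # hypothetical judgments for one emotion
    acc = hits / trials
    print(f"raw accuracy      : {acc:.2f}")    # 0.47
    print(f"chance level (1/k): {chance_level(k):.3f}")    # 0.056
    print(f"proportion index  : {proportion_index(acc, k):.2f}")    # ~0.94
```

Standardizing accuracies in this way is one common route to the kind of comparison made in the abstract between GEMEP recognition rates and those reported for published emotion recognition tests that use fewer, highly selected stimuli.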
