The Modulation of Cooperation and Emotion in Dialogue: The REC Corpus

In this paper we describe the Rovereto Emotive Corpus (REC), which we collected to investigate the relationship between emotion and cooperation in dialogue tasks. This is an area in which many questions remain open. One of the main open issues is the annotation and recognition of so-called "blended" emotions. Inter-rater agreement in emotion annotation is usually low and, surprisingly, emotion recognition is higher under modality deprivation (i.e., only the acoustic or only the visual modality, rather than a bimodal display of emotion). Motivated by these findings, we collected a corpus in which "emotive" tokens are flagged during the recordings by psychophysiological indexes (electrocardiogram and galvanic skin conductance). The output values of these indexes allow a general detection of the arousal associated with each emotion. After this selection we will annotate the emotive interactions with our multimodal annotation scheme and compute a kappa statistic on the annotation results to validate the coding scheme. We will then fit a logistic regression to the annotated data to test for correlations between cooperation and negative emotions. A final step will be an fMRI experiment on the recognition of blended emotions from facial displays.
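As a minimal sketch of the agreement check mentioned above, the following Python snippet computes Fleiss' kappa for nominal annotations by multiple raters. The function name and the item-by-category count layout are our own illustrative choices, not part of the REC annotation pipeline.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for nominal agreement among many raters.

    ratings: (n_items, n_categories) array where ratings[i, j] is the
    number of raters who assigned item i to category j. Every item is
    assumed to be rated by the same number of raters.
    """
    ratings = np.asarray(ratings, dtype=float)
    n_items, _ = ratings.shape
    n_raters = ratings.sum(axis=1)[0]

    # Proportion of all assignments falling in each category.
    p_j = ratings.sum(axis=0) / (n_items * n_raters)

    # Per-item agreement: fraction of rater pairs that agree on the item.
    P_i = (ratings * (ratings - 1)).sum(axis=1) / (n_raters * (n_raters - 1))

    P_bar = P_i.mean()        # observed agreement
    P_e = (p_j ** 2).sum()    # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 4 dialogue turns, 3 raters, 3 emotion categories.
counts = [[3, 0, 0],
          [0, 2, 1],
          [1, 1, 1],
          [0, 0, 3]]
print(round(fleiss_kappa(counts), 3))
```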
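The planned logistic regression could likewise be sketched as below. The variable names (negative_emotion, cooperation) and the synthetic data are hypothetical placeholders for illustration only; the actual analysis would be run on the annotated corpus.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical annotated data: one row per dialogue turn.
# negative_emotion: 1 if the turn was annotated with a negative emotion.
# cooperation: 1 if the turn was annotated as cooperative.
rng = np.random.default_rng(0)
negative_emotion = rng.integers(0, 2, size=200)
cooperation = (rng.random(200) > 0.3 + 0.3 * negative_emotion).astype(int)

# Logistic regression of cooperation on negative emotion;
# the sign and size of the coefficient indicate the association.
X = sm.add_constant(negative_emotion.astype(float))
model = sm.Logit(cooperation, X).fit(disp=False)
print(model.summary())
```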
