PACO: a Corpus to Analyze the Impact of Common Ground in Spontaneous Face-to-Face Interaction

PACO is a French audio-video conversational corpus made of 15 face-to-face dyadic interactions, each lasting around 20 minutes. This comparative corpus was created to explore the impact of the lack of personal common ground (Clark, 1996) on participants' collaboration during conversation, and specifically on their smiles during topic transitions. We built the PACO corpus by replicating the experimental protocol of "Cheese!" (Priego-Valverde et al., 2018). The only difference between the two corpora is the degree of common ground between the interlocutors: in Cheese! the interlocutors are friends, while in PACO they do not know each other. This experimental protocol makes it possible to analyze how the participants get acquainted. The study brings two main contributions. First, the PACO conversational corpus enables comparison of the impact of the interlocutors' common ground. Second, the semi-automatic smile annotation protocol yields reliable and reproducible smile annotations while reducing annotation time by a factor of 10. Keywords: common ground, spontaneous interaction, smile, automatic detection.
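The semi-automatic smile annotation protocol mentioned above builds on automatic facial-action analysis (cf. the OpenFace 2.0 toolkit cited in the references, which outputs per-frame action-unit intensities such as AU12, the lip-corner puller typical of smiles). A minimal, hypothetical sketch of such a pre-annotation step, turning per-frame AU12 intensities into candidate smile intervals for manual verification, could look like the following; the threshold, frame rate, and minimum duration are illustrative assumptions, not the authors' actual settings.

```python
def smile_intervals(au12, fps=25.0, threshold=1.0, min_dur=0.2):
    """Convert per-frame AU12 (lip-corner puller) intensities into
    (start_s, end_s) candidate smile intervals.

    Frames with intensity >= threshold are grouped into runs; runs
    shorter than min_dur seconds are discarded as detection noise.
    All parameter values here are illustrative assumptions.
    """
    intervals = []
    start = None  # frame index where the current run began, or None
    for i, v in enumerate(au12):
        if v >= threshold and start is None:
            start = i                      # run opens
        elif v < threshold and start is not None:
            if (i - start) / fps >= min_dur:
                intervals.append((start / fps, i / fps))
            start = None                   # run closes
    # close a run that reaches the end of the recording
    if start is not None and (len(au12) - start) / fps >= min_dur:
        intervals.append((start / fps, len(au12) / fps))
    return intervals
```

In a real pipeline, `au12` would come from the `AU12_r` column of an OpenFace output file, and the resulting intervals would be imported into an annotation tool such as ELAN for manual correction, which is where the factor-of-10 time saving would come from.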

[1]  P. Ekman,et al.  Facial Action Coding System: Manual , 1978 .

[2]  Brigitte Bigi,et al.  SPPAS - Multi-lingual Approaches to the Automatic Annotation of Speech , 2015 .

[3]  Sascha Fagel,et al.  Effects of Smiling on Articulation: Lips, Larynx and Acoustics , 2009, COST 2102 Training School.

[4]  Salvatore Attardo,et al.  Prosodic and multimodal markers of humor in conversation , 2011 .

[5]  Véronique Aubergé,et al.  Can we hear the prosody of smile? , 2003, Speech Commun..

[6]  Paul Boersma,et al.  Praat: doing phonetics by computer , 2003 .

[7]  S. Attardo,et al.  Smiling, gaze, and humor in conversation: A pilot study , 2016 .

[8]  A. J. Fridlund Human Facial Expression: An Evolutionary View , 1994 .

[9]  Y. Joanette,et al.  Analysis of Conversational Topic Shifts: A Multiple Case Study , 1997, Brain and Language.

[10]  Marine Riou,et al.  A Methodology for the Identification of Topic Transitions in Interaction , 2015 .

[11]  E. Schegloff,et al.  A simplest systematics for the organization of turn-taking for conversation , 1974 .

[12]  R. Espesser,et al.  Le CID - Corpus of Interactional Data. Annotation et exploitation multimodale de parole conversationnelle [The Corpus of Interactional Data (CID) - Multimodal annotation of conversational speech] , 2008, ICON.

[13]  Philippe Blache,et al.  MarsaTag, a tagger for French written texts and speech transcriptions , 2014 .

[14]  Maja Pantic,et al.  Automatic Analysis of Facial Actions: A Survey , 2019, IEEE Transactions on Affective Computing.

[15]  H. H. Clark  Using Language , 1996 .

[16]  P. Ekman,et al.  Unmasking the face : a guide to recognizing emotions from facial clues , 1975 .

[17]  J. Bavelas,et al.  Multi-modal communication of common ground: A review of social functions , 2017 .

[18]  C. Meunier,et al.  Automatic Segmentation of Spontaneous Speech , 2018, REVISTA DE ESTUDOS DA LINGUAGEM.

[19]  V. Tartter Happy talk: Perceptual and acoustic effects of smiling on speech , 1980, Perception & psychophysics.

[20]  L. Mondada,et al.  Traitement du topic, processus énonciatifs et séquences conversationnelles , 1995 .

[21]  Peter Wittenburg,et al.  Annotation by Category: ELAN and ISO DCR , 2008, LREC.

[23]  Louis-Philippe Morency,et al.  OpenFace 2.0: Facial Behavior Analysis Toolkit , 2018, 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018).

[24]  Kevin El Haddad,et al.  Smile and Laugh Dynamics in Naturalistic Dyadic Interactions: Intensity Levels, Sequences and Roles , 2019, ICMI.