Beyond backchannels: co-construction of dyadic stance by reciprocal reinforcement of smiles between virtual agents

Ken Prepin (ken.prepin@telecom-paristech.fr), LTCI-CNRS/Telecom-ParisTech, 37-39 rue Dareau, 75014 Paris, France
Magalie Ochs (magalie.ochs@telecom-paristech.fr), LTCI-CNRS/Telecom-ParisTech, 37-39 rue Dareau, 75014 Paris, France
Catherine Pelachaud (catherine.pelachaud@telecom-paristech.fr), LTCI-CNRS/Telecom-ParisTech, 37-39 rue Dareau, 75014 Paris, France

Abstract

When two persons participate in a discussion, they not only exchange the concepts and ideas they are discussing, they also express attitudes, feelings and commitments regarding their partner: they express interpersonal stances. Endowed with a backchannel model, several virtual agents are able to react to their partner's behaviour through their own non-verbal behaviour. In this paper, we go beyond this approach, proposing and testing a model that enables agents to express a dyadic stance, a marker of effective communication: agents naturally co-construct a shared dyadic stance if and only if their interpersonal stances are reciprocally positive. We focus on the smile, which conveys interpersonal stance and is a particularly efficient signal for the co-regulation of communication. With this model, a virtual agent that can only control its own individual parameters can, in fact, modulate and control the dyadic stance that emerges when it interacts with its partner. The evaluation of the model through a user perception study has enabled us to validate that the dyadic stance is perceived as significantly more positive (mutual understanding, attention, agreement, interest, pleasantness) when the reinforcement of smiles is reciprocal.

Keywords: dyadic interaction; interactive behaviours; dynamical systems; dyadic stance; smile; virtual agent

Introduction

When we consider verbal communication, interlocutors not only exchange the concepts and ideas which constitute the subject of their discussion; they also express feelings, judgements or commitments regarding this subject. This "attitude which, for some time, is expressed and sustained interactively in communication, in a unimodal or multi-modal manner" corresponds to the stance: Chindamo, Allwood, and Ahlsén (2012) review the existing definitions and descriptions of stance; they show how these definitions have evolved from a focus on the individual expression of stance to a more interactive and social description. Individual stance covers two types of stance: epistemic and interpersonal stance (Kiesling, 2009). The epistemic stance is the expression of the relationship of a person to his/her own talk (for instance "certain"). The interpersonal stances convey the relationship of a person to the interlocutor (for example "warm" or "polite"). Moreover, during an interaction, "stances are constructed across turns rather than being the product of a single turn" (Chindamo et al., 2012). When interactants with individual epistemic and interpersonal stances are put in presence, dyadic stances can be inferred (Prepin, Ochs, & Pelachaud, 2012) from the diachronic alignment between interactants. The effort of interlocutors to linguistically and non-verbally align through time is a marker of stance: it conveys stances of mutual understanding, attention, agreement, interest and pleasantness (Louwerse, Dale, Bard, & Jeuniaux, 2012).

The description of stance has not only evolved toward a distinction between individual and co-constructed stance. It has also evolved from a purely linguistic description (Du Bois, 2007; Kiesling, 2009) to a description involving interactants' Non-Verbal Behaviours (NVBs) (Scherer, 2005; Prepin et al., 2012). Non-verbal behaviours participate in maintaining contact between interactants and facilitate verbal exchange: they are an integral part of the communication process (Paradowski, 2011). NVBs actively convey stances through paralinguistic features (such as tone of voice, duration, loudness or prosody), facial expressions, and postures (Chindamo et al., 2012).

Models of interactive agents have mainly explored the automatic generation of a virtual agent's behaviour aligned with the interlocutor's behaviour. Buschmeier, Bergmann, and Kopp (2010) combine a model of lexical alignment with a model generating behaviours based on linguistic information. Bailenson and Yee (2005) model the NVB alignment of a speaking virtual agent to a listening human: they propose a Digital Chameleon (in reference to the Chameleon effect described by Chartrand and Bargh (1999)). Bevacqua, Hyniewska, and Pelachaud (2010) model the NVB alignment of a listening agent to a speaking human: they propose a model of backchannels, i.e. NVBs aligned in time and nature, to help human users tell a story.

All these models focus on the adaptation of the virtual agent to its interlocutor, but do not take into account the reciprocal adaptation of this interlocutor: behaviours are computed in reaction to the partner's behaviour, but not in interaction with the partner's behaviour; the dynamical coupling associated with the mutual engagement of interactants is not modelled, and critical parameters of interaction such as synchrony and alignment, which appear as side effects of this coupling (Paolo,
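The intuition behind reciprocal reinforcement can be sketched as a minimal coupled dynamical system. This is an illustrative toy, not the authors' actual model: the function name, the coupling gain, the decay rate, and the stance weights below are all assumptions introduced for the example. Each agent's smile intensity decays on its own and is reinforced by the partner's smile in proportion to the agent's interpersonal stance; the smiles saturate together only when both stances are positive.

```python
# Toy sketch (hypothetical parameters, not the paper's model): two agents
# whose smile intensities s1, s2 form a coupled dynamical system.
def simulate(stance1, stance2, steps=200, dt=0.05, decay=1.0, gain=2.0):
    """Return final smile intensities of both agents, clamped to [0, 1].

    stance1/stance2: interpersonal stance of each agent toward the other,
    positive for a warm stance, negative for a cold one (assumed encoding).
    """
    s1 = s2 = 0.1  # small initial smiles
    for _ in range(steps):
        # Each smile decays by itself and is reinforced by the partner's
        # smile, weighted by the agent's own stance toward the partner.
        ds1 = -decay * s1 + gain * stance1 * s2
        ds2 = -decay * s2 + gain * stance2 * s1
        # Euler integration step, clamped to the valid intensity range.
        s1 = min(1.0, max(0.0, s1 + dt * ds1))
        s2 = min(1.0, max(0.0, s2 + dt * ds2))
    return s1, s2

# Reciprocally positive stances: the loop closes and both smiles saturate.
print(simulate(1.0, 1.0))   # → (1.0, 1.0)
# One negative stance breaks the loop: both smiles die out.
print(simulate(1.0, -1.0))
```

The point of the sketch is that each agent only sets its own parameters (its stance weight), yet the dyadic outcome, mutual smiling or extinction, is a property of the coupled pair, which is the co-construction effect the model aims at.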

References

[1] Catherine Pelachaud, et al. Live generation of interactive non-verbal behaviours, 2012, AAMAS.

[2] Michał B. Paradowski, et al. The Embodied Language: Why Language Should Not Be Conceived of in Abstraction from the Brain and Body, and the Consequences for Robotics, 2011.

[3] Elisabeth Ahlsén, et al. Some Suggestions for the Study of Stance in Communication, 2012, 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Conference on Social Computing.

[4] P. Ekman, et al. Felt, false, and miserable smiles, 1982.

[5] K. Scherer. What are emotions? And how can they be measured?, 2005.

[6] A. Jaffe. Stance: Sociolinguistic Perspectives, 2009.

[7] Radoslaw Niewiadomski, et al. How a Virtual Agent Should Smile? - Morphological and Dynamic Characteristics of Virtual Agent's Smiles, 2010, IVA.

[8] Catherine Pelachaud, et al. Mutual Stance Building in Dyad of Virtual Agents: Smile Alignment and Synchronisation, 2012, 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Conference on Social Computing.

[9] Catherine Pelachaud, et al. Effect of time delays on agents' interaction dynamics, 2011, AAMAS.

[10] J. Bailenson, et al. Digital Chameleons, 2005, Psychological Science.

[11] Philippe Gaussier, et al. Avoiding the world model trap: An acting robot does not need to be so smart!, 1994.

[12] J. Nadel, et al. Experiencing contingency and agency: First step toward self-understanding in making a mind?, 2005.

[13] John W. Du Bois. The stance triangle, 2007.

[14] Marc Schröder, et al. The SEMAINE API: Towards a Standards-Based Framework for Building Emotion-Oriented Systems, 2010, Adv. Hum. Comput. Interact.

[15] Stefan Kopp, et al. Adaptive expressiveness: virtual conversational agents that can align to their interaction partner, 2010, AAMAS.

[16] E. Thelen, et al. Using dynamic field theory to rethink infant habituation, 2006, Psychological Review.

[17] M. Auvray, et al. Perceptual interactions in a minimalist virtual environment, 2009.

[18] Catherine Pelachaud, et al. Basics of Intersubjectivity Dynamics: Model of Synchrony Emergence When Dialogue Partners Understand Each Other, 2011, ICAART.

[19] T. Chartrand, et al. The chameleon effect: the perception-behavior link and social interaction, 1999, Journal of Personality and Social Psychology.

[20] Elisabetta Bevacqua. Positive influence of smile backchannels in ECAs, 2010.

[21] M. Rohde, et al. Sensitivity to social contingency or stability of interaction? Modelling the dynamics of perceptual crossing, 2008.

[22] H. D. Jaegher, et al. Enactive intersubjectivity: Participatory sense-making and mutual incorporation, 2009.

[23] Rick Dale, et al. Behavior Matching in Multimodal Communication Is Synchronized, 2012, Cogn. Sci.