Using Linguistic Alignment to Enhance Learning Experience with Pedagogical Agents: The Special Case of Dialect

Empirical research has shown that verbal and nonverbal alignment occurs in human-computer interaction (HCI) in much the same way as in human-human interaction (HHI) [1-3]. Against the background of similarity attraction [4], the "we-feeling" among speakers who share a dialect origin [5], and previous investigations of speech variation [6,7], the present study analyses the effect of a virtual pedagogical agent's dialectal language use in a tutoring setting and its ramifications for the learning situation. An experimental study with a between-subjects design (N=47) was conducted in which the virtual interlocutor, controlled in a Wizard-of-Oz scenario, explained medical topics to the participants and subsequently questioned them about these topics, speaking either dialect or High German. The results show that linguistic alignment occurred in both conditions, but more strongly in interaction with the High German-speaking agent. Furthermore, the dialect-speaking agent was rated as more likable, while no effects emerged with regard to social presence. Implications for theory and development are discussed.
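
For illustration only, the between-subjects comparison described in the abstract could be operationalized along the following lines. This is a minimal sketch, not the study's actual analysis: the alignment measure (the share of agent-introduced terms a participant reuses), the per-participant scores, and the use of Welch's t-test are assumptions made for this example.

```python
# Minimal sketch (illustrative only): quantify per-participant lexical alignment
# and compare the two between-subjects conditions (dialect vs. High German).
# All terms, utterances, and scores below are hypothetical, not study data.
from scipy import stats


def alignment_score(agent_terms: set, participant_utterance: str) -> float:
    """Share of agent-introduced terms that the participant reuses."""
    if not agent_terms:
        return 0.0
    tokens = set(participant_utterance.lower().split())
    return len(agent_terms & tokens) / len(agent_terms)


# Example: one hypothetical participant reply, scored against the agent's terms.
agent_terms = {"hypertension", "artery", "diagnosis"}
reply = "so hypertension means the artery pressure is too high"
print(f"alignment = {alignment_score(agent_terms, reply):.2f}")

# Hypothetical per-participant alignment scores for each condition.
dialect_scores = [0.40, 0.35, 0.50, 0.30, 0.45]
high_german_scores = [0.55, 0.60, 0.50, 0.65, 0.58]

# Welch's t-test (no equal-variance assumption) for the between-subjects comparison.
t_stat, p_value = stats.ttest_ind(high_german_scores, dialect_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```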

[1] R. Moreno, et al. Students' choice of animated pedagogical agents in science learning: A test of the similarity-attraction hypothesis on gender and ethnicity, 2006.

[2] Francisco Iacobelli, et al. Ethnic Identity and Engagement in Embodied Conversational Agents, 2007, IVA.

[3] S. Brennan, et al. How Listeners Compensate for Disfluencies in Spontaneous Speech, 2001.

[4] H. Giles, et al. Contexts of Accommodation: Developments in Applied Sociolinguistics, 2010.

[5] G. Bente, et al. Personalizing e-Learning. The Social Effects of Pedagogical Agents, 2010.

[6] C. Nass, et al. Truth is beauty: researching embodied conversational agents, 2001.

[7] Stefan Kopp, et al. Smile and the world will smile with you - The effects of a virtual agent's smile on users' evaluation and behavior, 2013, Int. J. Hum. Comput. Stud.

[8] Aaron Powers, et al. Matching robot appearance and behavior to tasks to improve human-robot cooperation, 2003, 12th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2003).

[9] Timothy W. Bickmore, et al. Should Agents Speak Like, um, Humans? The Use of Conversational Fillers by Virtual Agents, 2009, IVA.

[10] M. Pickering, et al. The role of beliefs in lexical alignment: Evidence from dialogs with humans and computers, 2011, Cognition.

[11] Frank Biocca, et al. The Effect of the Agency and Anthropomorphism on Users' Sense of Telepresence, Copresence, and Social Presence in Virtual Environments, 2003, Presence: Teleoperators & Virtual Environments.

[12] R. Bollet, et al. Personalizing E-Learning, 2002.

[13] H. Giles, et al. Accommodation theory: Communication, context, and consequence, 1991.

[14] Nicole C. Krämer, et al. Quid Pro Quo? Reciprocal Self-disclosure and Communicative Accommodation towards a Virtual Interviewer, 2011, IVA.

[15] Nicole C. Krämer, et al. Great minds think alike. Experimental study on lexical alignment in human-agent interaction, 2013, i-com.

[16] Susan E. Brennan, et al. Lexical Entrainment in Spontaneous Dialog, 1996.

[17] MIT Press. Presence: Teleoperators and Virtual Environments, 2014.

[18] Anna K. Kuhlen, et al. Language in Dialogue: When Confederates Might Be Hazardous to Your Data, 2013, Psychonomic Bulletin & Review.

[19] M. Pickering, et al. Linguistic alignment between people and computers, 2010.

[20] Clifford Nass, et al. Maximized modality or constrained consistency?, 1999, AVSP.

[21] J. Cassell, et al. Embodied conversational agents, 2000.

[22] S. Garrod, et al. Saying what you mean in dialogue: A study in conceptual and semantic co-ordination, 1987, Cognition.

[23] J. Kaplan, et al. Great minds think alike, 1993, Current Biology.