Referential Gaze Makes a Difference in Spoken Language Comprehension: Human Speaker vs. Virtual Agent Listener Gaze

An interlocutor's referential gaze plays an important role in face-to-face communication, as it facilitates spoken language comprehension. People can also exploit the gaze of virtual agents in interaction. Our study examined the effects of human speaker gaze versus virtual agent listener gaze on reaction times, accuracy, and eye movements. Participants watched videos in which a static scene depicting three characters was visible on a screen. We recorded participants' eye movements as they listened to German SVO sentences describing an interaction between two of the three characters. We manipulated (1) whether the human speaker uttering the sentence was visible, (2) whether the agent listener was present, and (3) whether the template following each video matched the scene. After each trial, a template schematically depicting three characters and their interaction appeared on screen, and participants verified whether sentence and template were congruent. Participants performed the matching task very well across all conditions. They responded faster to matches than to mismatches between sentence and template, and were slower when the agent was present. The eye-movement results suggest that during the NP2 region, participants looked at the NP2 referent to a greater extent when the speaker was visible than in the other conditions.
