See What I'm Saying? Using Dyadic Mobile Eye Tracking to Study Collaborative Reference

To build intelligent collaborative systems that can anticipate and respond appropriately to users' needs and actions, we need a detailed understanding of the process of collaborative reference. We developed a dyadic eye-tracking methodology and a set of metrics for studying the multimodal process of reference, and applied these techniques in an experiment built around a naturalistic conversation elicitation task. We found systematic differences in linguistic and visual coordination between mobile and seated pairs of participants. Our results detail measurable interactions among referential form, gaze, and spatial context, and can inform the development of more natural collaborative user interfaces.
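
As an illustration of the kind of visual-coordination measure such dyadic analyses commonly rely on, the sketch below computes a windowed cross-correlation between two participants' gaze-on-referent time series. This is an assumption made for illustration only, not necessarily the metric used in this study; the function name, sampling rate, and window/lag parameters are likewise hypothetical.

# Minimal sketch (assumption): windowed cross-correlation between two gaze
# signals as one way to quantify dyadic visual coordination. Parameter values
# (window, lag, step) are illustrative, not taken from the paper.
import numpy as np

def windowed_cross_correlation(gaze_a, gaze_b, window=60, max_lag=30, step=15):
    """Correlate two equal-length gaze signals (e.g., 1 = fixating the current
    referent, 0 = not) within sliding windows over a range of lags.

    Returns an array of shape (n_windows, 2*max_lag + 1) of Pearson r values.
    """
    gaze_a = np.asarray(gaze_a, dtype=float)
    gaze_b = np.asarray(gaze_b, dtype=float)
    n = min(len(gaze_a), len(gaze_b))
    results = []
    # Stop early enough that the most extreme lag still fits inside the signal.
    for start in range(0, n - window - 2 * max_lag + 1, step):
        row = []
        for lag in range(-max_lag, max_lag + 1):
            a = gaze_a[start + max_lag : start + max_lag + window]
            b = gaze_b[start + max_lag + lag : start + max_lag + lag + window]
            # Guard against zero variance (e.g., a window with no fixations).
            if a.std() == 0 or b.std() == 0:
                row.append(np.nan)
            else:
                row.append(np.corrcoef(a, b)[0, 1])
        results.append(row)
    return np.array(results)

if __name__ == "__main__":
    # Example at an assumed 30 Hz: 2 s windows, lags up to +/- 1 s.
    rng = np.random.default_rng(0)
    speaker = (rng.random(600) > 0.5).astype(float)
    listener = np.roll(speaker, 10)  # listener roughly follows ~10 samples later
    r = windowed_cross_correlation(speaker, listener)
    print(r.shape)  # (n_windows, 61)

In an analysis of this kind, a correlation peak at a nonzero lag would indicate that one partner's gaze systematically leads the other's, which is one way "visual coordination" between partners can be operationalized.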
