Attention as an input modality for Post-WIMP interfaces using the viGaze eye tracking framework

Eye tracking is one of the most prominent modalities for tracking user attention during interaction with computational devices. Today, most eye tracking frameworks focus on tracking the user's gaze during website browsing or while performing other tasks on a digital device. What most of these frameworks have in common is that they do not exploit gaze as an input modality. In this paper we describe the realization of a framework named viGaze. Its main goal is to provide an easy-to-use framework for exploiting eye gaze as an input modality in various contexts. To this end, it provides features to explore explicit and implicit interactions in complex virtual environments by using a user's eye gaze for various interactions. The viGaze framework is flexible and can easily be extended to incorporate other input modalities typically used in Post-WIMP interfaces, such as gesture or foot input. In this paper we describe the key components of our viGaze framework and a user study that was conducted to test it. The user study took place in a virtual retail environment, which provides a challenging pervasive setting and contains complex interactions that can be supported by gaze. The participants performed two gaze-based interactions with products on virtual shelves and started an interaction cycle between the products and an advertisement monitor placed on the shelf. We demonstrate how gaze can be used in Post-WIMP interfaces to steer the attention of users to certain components of the system. We conclude by discussing the advantages provided by the viGaze framework and highlighting the potential of gaze-based interaction.
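The interaction cycle described above (a sustained gaze on a shelf product triggering content on an advertisement monitor) can be sketched with a simple dwell-time mechanism. The following is a minimal illustrative sketch, not viGaze's actual API: the class names, the `DWELL_THRESHOLD` value, and the sample format `(timestamp, target_name)` are all assumptions introduced for this example.

```python
# Hypothetical sketch of a gaze-driven interaction cycle: dwelling on a
# shelf product fires an event on an advertisement monitor. All names are
# illustrative assumptions, not viGaze's real interface.

DWELL_THRESHOLD = 0.5  # seconds of sustained gaze that count as a selection


class GazeTarget:
    """A gaze-sensitive object, e.g. a product on a virtual shelf."""

    def __init__(self, name):
        self.name = name
        self._dwell_start = None  # timestamp when the current fixation began

    def on_gaze(self, timestamp):
        """Register a gaze sample; return True once the dwell threshold is met."""
        if self._dwell_start is None:
            self._dwell_start = timestamp
        return timestamp - self._dwell_start >= DWELL_THRESHOLD

    def on_gaze_lost(self):
        """Reset the dwell timer when the gaze leaves the target."""
        self._dwell_start = None


class AdMonitor:
    """Stand-in for the advertisement monitor placed on the shelf."""

    def show(self, product_name):
        print(f"Showing advertisement for {product_name}")


def run_cycle(samples, product, monitor):
    """Feed (timestamp, target_name) gaze samples; fire the ad on dwell.

    Returns True if the dwell threshold was reached, else False.
    """
    for ts, target in samples:
        if target == product.name:
            if product.on_gaze(ts):
                monitor.show(product.name)
                return True
        else:
            product.on_gaze_lost()
    return False
```

Dwell time is used here because it is the most common way to disambiguate deliberate gaze selection from casual looking; an implicit variant could instead log attention without triggering any visible response.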
