Minimal sequential gaze models for inferring walkers' tasks

Eye movements during extended sequential behavior are known to reflect task demands far more than low-level feature saliency. However, the more naturalistic a task is, the more difficult it becomes to establish which cognitive processes it elicits moment by moment. Here we ask which sequential model is required to capture gaze sequences so that the ongoing task can be inferred reliably. Specifically, we consider eye movements of human subjects navigating a walkway in a virtual environment while avoiding obstacles and approaching targets. We show that Hidden Markov Models, which have been used extensively to model human sequential behavior, can be augmented with a few state variables describing the subject's egocentric position relative to objects in the environment. This augmentation dramatically increases the rate of successful classification of the ongoing task and yields generated gaze sequences that closely match those observed in human subjects.
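The inference idea described above can be illustrated with a minimal sketch: fit (or specify) one HMM per task, score an observed gaze sequence under each model with the forward algorithm, and report the task whose model assigns the higher likelihood. All parameters and gaze categories below are illustrative assumptions, not the paper's fitted models.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]          # initial forward probabilities
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()               # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # predict, then weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

# Two hypothetical task-specific HMMs over 3 gaze-target categories
# (0 = path, 1 = obstacle, 2 = target); all numbers are made up.
pi = np.array([0.5, 0.25, 0.25])
A_avoid = np.array([[0.6, 0.3, 0.1],
                    [0.4, 0.5, 0.1],
                    [0.5, 0.2, 0.3]])
B_avoid = np.array([[0.7, 0.2, 0.1],
                    [0.2, 0.7, 0.1],
                    [0.3, 0.2, 0.5]])
A_approach = np.array([[0.6, 0.1, 0.3],
                       [0.5, 0.3, 0.2],
                       [0.3, 0.1, 0.6]])
B_approach = np.array([[0.7, 0.1, 0.2],
                       [0.3, 0.5, 0.2],
                       [0.1, 0.1, 0.8]])

gaze = [0, 2, 2, 0, 2, 2, 2]  # fixation sequence dominated by the target
ll_avoid = forward_loglik(gaze, pi, A_avoid, B_avoid)
ll_approach = forward_loglik(gaze, pi, A_approach, B_approach)
task = "approach" if ll_approach > ll_avoid else "avoid"
```

The paper's augmentation would extend the hidden state with egocentric position variables (distance and angle to nearby objects); the scoring step stays the same, only the state space and emission model grow.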
