Task relevance predicts gaze in videos of real moving scenes

Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382–390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search, where stimuli are in constant motion and where the ‘target’ of the search is abstract and semantic in nature. Here, we investigate this issue by having participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to the relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice as much variance in gaze likelihood as the amount of low-level visual change over time in the video stimuli.
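
The abstract's key quantitative claim is a comparison of variance explained: reported suspiciousness (task relevance) versus low-level visual change as predictors of gaze likelihood. Below is a minimal, illustrative Python sketch of that comparison. The synthetic data, the variable names, and the use of simple one-predictor linear regressions are assumptions for demonstration only, not the authors' analysis pipeline.

```python
import numpy as np

# Illustrative sketch only (synthetic data): the paper reports that reported
# suspiciousness explained roughly twice the variance in gaze likelihood that
# low-level visual change did. Nothing below reproduces the actual analysis.

rng = np.random.default_rng(seed=1)
n_bins = 500  # hypothetical number of time bins pooled over the four CCTV screens

# Hypothetical per-time-bin predictors:
suspiciousness = rng.uniform(0.0, 1.0, n_bins)  # joystick-reported task relevance
visual_change = rng.uniform(0.0, 1.0, n_bins)   # e.g. mean frame-difference energy

# Hypothetical gaze likelihood, weighted more heavily toward task relevance:
gaze_likelihood = (0.6 * suspiciousness
                   + 0.3 * visual_change
                   + 0.2 * rng.normal(size=n_bins))

def variance_explained(predictor: np.ndarray, outcome: np.ndarray) -> float:
    """R^2 of a simple (one-predictor) linear regression."""
    r = np.corrcoef(predictor, outcome)[0, 1]
    return float(r ** 2)

print(f"R^2, suspiciousness -> gaze: {variance_explained(suspiciousness, gaze_likelihood):.3f}")
print(f"R^2, visual change  -> gaze: {variance_explained(visual_change, gaze_likelihood):.3f}")
```

With real data, the synthetic series would be replaced by measured gaze likelihood, joystick ratings, and frame-difference values per screen region and time bin.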

[1] O. Le Meur, et al. Predicting visual fixations on video based on low-level visual features. Vision Research, 2007.

[2] Jochen J. Steil, et al. Where to Look Next? Combining Static and Dynamic Proto-objects in a TVA-based Model of Visual Attention. Cognitive Computation, 2010.

[3] Laurent Itti, et al. Beyond bottom-up: Incorporating task-dependent influences into a computational model of spatial attention. IEEE Conference on Computer Vision and Pattern Recognition, 2007.

[4] Derrick J. Parkhurst, et al. Scene content selected by active vision. Spatial Vision, 2003.

[5] L. Itti, 1999.

[6] W. Boot, et al. Age-related differences in visual search in dynamic displays. Psychology and Aging, 2007.

[7] David N. Lee, et al. Where we look when we steer. Nature, 1994.

[8] C. Koch, et al. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 2000.

[9] Jochen J. Steil, et al. Integrating Inhomogeneous Processing and Proto-object Formation in a Computational Model of Visual Attention. Human Centered Robot Systems: Cognition, Interaction, Technology, 2009.

[10] Gregory J. Zelinsky, et al. Visual search is guided to categorically-defined targets. Vision Research, 2009.

[11] Iain D. Gilchrist, et al. Visual correlates of fixation selection: effects of scale and time. Vision Research, 2005.

[12] M. Land, et al. The Roles of Vision and Eye Movements in the Control of Activities of Daily Living. Perception, 1998.

[13] P. de Graef, et al. Local and global contextual constraints on the identification of objects in scenes. Canadian Journal of Psychology, 1992.

[14] Asha Iyer, et al. Components of bottom-up gaze allocation in natural images. Vision Research, 2005.

[15] Martin Eimer, et al. Top-down search strategies determine attentional capture in visual search: Behavioral and electrophysiological evidence. Attention, Perception & Psychophysics, 2010.

[16] M. Land, et al. The effects of skill on the eye–hand span during musical sight-reading. Proceedings of the Royal Society of London, Series B: Biological Sciences, 1999.

[17] Michael L. Mack, et al. Visual saliency does not account for eye movements during visual search in real-world scenes. 2007.

[18] Derrick J. Parkhurst, et al. Modeling the role of salience in the allocation of overt visual attention. Vision Research, 2002.

[19] Gregory J. Zelinsky, et al. Scene context guides eye movements during visual search. Vision Research, 2006.

[20] Robert A. Marino, et al. Free viewing of dynamic stimuli by humans and monkeys. Journal of Vision, 2009.

[21] G. Zelinsky, et al. Search guidance is proportional to the categorical specificity of a target cue. Quarterly Journal of Experimental Psychology, 2009.

[22] Krista A. Ehinger, et al. Modelling search for people in 900 scenes: A combined source model of eye guidance. 2009.

[23] George L. Malcolm, et al. Searching in the dark: Cognitive relevance drives attention in real-world scenes. Psychonomic Bulletin & Review, 2009.

[24] Xin Chen, et al. Real-world visual search is dominated by top-down guidance. Vision Research, 2006.

[25] M. Carrasco, et al. Sustained and transient covert attention enhance the signal via different contrast response functions. Vision Research, 2006.

[26] J. Wolfe, et al. Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1994.

[27] Robin L. Hill, et al. Eye movements: a window on mind and brain. 2007.

[28] A. L. Yarbus. Eye Movements and Vision. Springer US, 1967.

[29] Olaf Blanke, et al. Gravity and observer's body orientation influence the visual perception of human body postures. Journal of Vision, 2009.

[30] Roland J. Baddeley, et al. The nature of the visual representations involved in eye movements when walking down the street. 2009.

[31] Z. W. Pylyshyn, et al. Tracking multiple independent targets: evidence for a parallel tracking mechanism. Spatial Vision, 1988.

[32] Jillian H. Fecteau, et al. Salience, relevance, and firing: a priority map for target selection. Trends in Cognitive Sciences, 2006.

[33] George L. Malcolm, et al. Combining top-down processes to guide eye movements during real-world scene search. Journal of Vision, 2010.
