Temporal evolution of the central fixation bias in scene viewing.

When viewing an image of a natural scene on a computer screen, observers initially move their eyes toward the center of the image, a reliable experimental finding termed the central fixation bias. This systematic tendency in eye guidance likely masks attentional selection driven by image properties and top-down cognitive processes. Here, we show that the central fixation bias can be reduced by delaying the initial saccade relative to image onset. In four scene-viewing experiments, we manipulated observers' initial gaze position and delayed their first saccade by a specific time interval relative to image onset. Analyzing the distance to the image center over time, we found that the central fixation bias of initial fixations was significantly reduced after delayed saccade onsets. We additionally show that selection of the initial saccade target depended strongly on first-saccade latency. A previously published model of saccade generation was extended with a central activation map at the initial fixation whose influence declined with increasing saccade latency. This extension was sufficient to replicate the central fixation bias observed in our experiments. Our results suggest that the central fixation bias is generated by default activation in response to the sudden image onset and that this default activation pattern decays over time. Thus, it may often be preferable to use a modified version of the scene-viewing paradigm that decouples image onset from the start signal for scene exploration, thereby explicitly reducing the central fixation bias.
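The model extension described above can be illustrated with a minimal sketch: a center-peaked activation map is mixed with an image-based saliency map, and the central component's weight decays with saccade latency. The exponential decay, the time constant `tau_ms`, and the Gaussian width `sigma_frac` are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def central_activation(shape, sigma_frac=0.25):
    """Illustrative 2D Gaussian activation map peaked at the image center.

    sigma_frac scales the Gaussian width relative to the shorter
    image dimension (an assumed, not fitted, parameter).
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma = sigma_frac * min(h, w)
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def combined_map(saliency, latency_ms, tau_ms=200.0, sigma_frac=0.25):
    """Mix a saliency map with the central map.

    The central component's weight decays exponentially with the
    latency of the first saccade (hypothetical decay law); at long
    latencies the map is dominated by image-based saliency.
    """
    weight = np.exp(-latency_ms / tau_ms)
    center = central_activation(saliency.shape, sigma_frac)
    sal = saliency / saliency.sum()
    mix = weight * center + (1.0 - weight) * sal
    return mix / mix.sum()
```

With a uniform saliency map, a zero-latency saccade is drawn entirely by the central map (peak at the image center), while a very late saccade sees an almost uniform map, mirroring the reduced central fixation bias after delayed saccade onsets.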
