Stochastic bottom-up fixation prediction and saccade generation

This article introduces a novel technique for fixation prediction and saccade generation. The proposed model simulates saccadic eye movements in order to incorporate the underlying eye-movement mechanism into saliency estimation. To this end, a simple salience measure is introduced. We then derive a system model for saccade generation and apply it within a stochastic filtering framework. The resulting model dynamically makes a saccade toward the next predicted fixation and produces saliency maps. The proposed model is evaluated in terms of both saccade generation performance and saliency estimation. The saccade generation evaluation shows that the proposed model outperforms inhibition of return, and further experiments indicate that integrating the eye-movement mechanism into saliency estimation improves the results. Finally, a comparison with several saliency models shows that the proposed model performs competitively.
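
The paper's own formulation is not reproduced here; purely as an illustrative sketch, the code below shows one way a stochastic filtering (particle filter) loop can turn a bottom-up saliency map into a sequence of saccades, which is the general pipeline the abstract describes. The function name generate_scanpath, the central-fixation initialisation, and all parameters (particle count, saccade noise saccade_sigma) are assumptions for illustration, not the authors' design.

```python
import numpy as np

# Hypothetical sketch: particle-filter style saccade generation over a saliency map.
# State = current fixation (x, y). Particles are propagated with Gaussian saccade
# noise, re-weighted by local saliency, and resampled; parameters are illustrative.

def generate_scanpath(saliency, n_fixations=10, n_particles=500,
                      saccade_sigma=40.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    h, w = saliency.shape

    # Initialise particles around the image centre (a common central-fixation bias).
    particles = np.column_stack([
        rng.normal(w / 2, w / 8, n_particles),
        rng.normal(h / 2, h / 8, n_particles),
    ])
    scanpath = []

    for _ in range(n_fixations):
        # Prediction: propagate each particle by a random saccade.
        particles += rng.normal(0.0, saccade_sigma, particles.shape)
        particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)
        particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)

        # Update: weight particles by the saliency at their locations.
        weights = saliency[particles[:, 1].astype(int),
                           particles[:, 0].astype(int)].astype(float)
        weights += 1e-12          # guard against an all-zero map
        weights /= weights.sum()

        # Take the weighted particle mean as the next predicted fixation.
        fixation = (particles * weights[:, None]).sum(axis=0)
        scanpath.append(fixation)

        # Resample to avoid particle degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]

    return np.array(scanpath)
```

Any nonnegative saliency map can be passed in, for example the output of one of the bottom-up models the paper compares against; the returned array lists the predicted fixations in saccade order.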
