Visual attention guided eye movements for 360 degree images

Most computational models of visual-attention-guided saccadic eye movements are designed for 2D images, where a fixation density map is generated for each image from recorded human gaze fixations. However, such traditional models are not well suited to 360-degree images. In this paper, we treat fixations as indicators of salience and analyze the statistical properties of human saccadic behavior. We find that human vision tends to concentrate on regions with abundant information, and that the statistics of these regions are close to a super-Gaussian distribution. Based on these observations, we propose a novel approach to simulate scan-paths of human eye movements over 360-degree images. We first extract the high-frequency components of image regions, which follow a super-Gaussian distribution; projection pursuit is then used to select among these components, and the location of the maximum response of the selected component is taken as the next gaze fixation. To better capture the characteristics of human eye movements, we also take the spatio-temporal information of gaze fixations into account: for each scan-path, we record the position of every fixation and the duration between two adjacent fixations. Moreover, to reflect the differences in visual attention across observers, our model produces multiple distinct scan-paths for the same image. Compared with the ground truth, the scan-paths predicted by our method match the recorded human scan-paths well.
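To make the pipeline concrete, the following is a minimal sketch of one way the abstract's steps could be realized: extract high-frequency patches, use a random-search projection-pursuit step with excess kurtosis as the super-Gaussianity index, fixate the patch with the strongest response, and record a (position, duration) sequence. All function names, the suppression of previously fixated regions, and the duration model are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_patches(image, patch=16, stride=8, sigma=2.0):
    """Collect overlapping high-frequency patches and their centre coordinates."""
    residual = image - gaussian_filter(image, sigma=sigma)  # high-frequency component
    h, w = residual.shape
    feats, centres = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            block = residual[y:y + patch, x:x + patch].ravel()
            feats.append(block - block.mean())
            centres.append((y + patch // 2, x + patch // 2))
    return np.asarray(feats), np.asarray(centres, dtype=float)

def excess_kurtosis(z):
    """Projection index: positive values indicate a super-Gaussian (heavy-tailed) distribution."""
    z = (z - z.mean()) / (z.std() + 1e-12)
    return float(np.mean(z ** 4) - 3.0)

def scan_path(image, n_fixations=10, n_directions=300, suppress_radius=40, rng=None):
    """Generate one scan-path: at each step, search random unit directions for the
    one whose projected patch responses are most super-Gaussian, fixate the patch
    with the strongest response, then suppress its neighbourhood so the next
    fixation moves elsewhere (a simple stand-in for inhibition of return)."""
    rng = np.random.default_rng(rng)
    feats, centres = extract_patches(image)
    active = np.ones(len(centres), dtype=bool)
    path = []
    for _ in range(n_fixations):
        best_index, best_dir = -np.inf, None
        for _ in range(n_directions):
            d = rng.standard_normal(feats.shape[1])
            d /= np.linalg.norm(d)
            index = excess_kurtosis(feats[active] @ d)
            if index > best_index:
                best_index, best_dir = index, d
        responses = np.abs(feats @ best_dir)
        responses[~active] = -np.inf
        k = int(np.argmax(responses))
        y, x = centres[k]
        # Hypothetical duration model: dwell time grows with the projection index.
        duration_ms = 150.0 + 20.0 * max(best_index, 0.0)
        path.append((int(y), int(x), float(duration_ms)))
        # Suppress patches near the current fixation before the next step.
        dist = np.linalg.norm(centres - np.array([y, x]), axis=1)
        active &= dist > suppress_radius
        if not active.any():
            break
    return path
```

Running `scan_path` with different random seeds yields different scan-paths for the same image, mirroring the observer-to-observer variability described above; for a 360-degree image, the sketch would be applied to viewport or tangent-plane projections rather than the raw equirectangular frame, which is an assumption on our part.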
