A feature-based approach for saliency estimation of omni-directional images

Abstract Omni-directional imaging records visual information from every direction with respect to a given viewpoint. It is gaining popularity among consumers due to the rapid spread of low-cost devices for both acquisition and rendering. The ability to render the entire surrounding space represents a further step towards immersivity, providing the user with the illusion of being physically present in a virtual environment. Understanding visual attention mechanisms for these images is therefore relevant for processing, coding, and exploiting such data. In this contribution, a saliency model for omni-directional images is presented. It is based on the combination of low-level and semantic features: the former account for texture, viewport saliency, hue, and saturation, while the latter capture the impact of the presence of human subjects on saliency. The proposed model was tested in the “Salient360! Visual attention modeling for 360° Images” Grand Challenge. The model, the achieved results, and the findings and discussion are presented here.
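The abstract describes combining several normalized feature maps (low-level cues plus a semantic, human-presence map) into a single saliency estimate. A minimal sketch of such a fusion step is shown below; the function name `combine_saliency`, the min-max normalization, and the uniform default weights are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def normalize(m):
    # Min-max normalize a feature map to [0, 1]; flat maps become all zeros.
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def combine_saliency(low_level_maps, semantic_maps, weights=None):
    # Hypothetical fusion: linearly combine normalized low-level maps
    # (e.g. texture, hue, saturation) with semantic maps (e.g. detected
    # human subjects) into one saliency map. Weights default to uniform.
    maps = [normalize(m) for m in list(low_level_maps) + list(semantic_maps)]
    if weights is None:
        weights = [1.0 / len(maps)] * len(maps)
    fused = sum(w * m for w, m in zip(weights, maps))
    return normalize(fused)
```

In practice the weights would be tuned on ground-truth fixation data, and for equirectangular 360° images each map would typically be computed on viewport projections to avoid polar distortion.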
