Visual saliency detection via integrating bottom-up and top-down information

Abstract Selective attention is a process that enables biological and artificial systems to discard redundant information and highlight the valuable regions of an image. The relevant information is determined by task-driven (Top-Down, TD) or task-independent (Bottom-Up, BU) factors. In this paper, we present a new computational visual saliency model that combines BU and TD mechanisms to extract the relevant regions of images containing man-made objects. The prior knowledge about man-made objects is their compactness and their strong responses across different orientations. Accordingly, using the maximum and minimum moments of the phase congruency covariance together with orientation responses from Gabor filters, we obtain feature maps from the two attention mechanisms. Finally, these maps are linearly combined, with the combination coefficients derived from the entropy of each feature map. Three region-based databases were used to examine the performance of the proposed method. The experimental results demonstrate the efficiency and effectiveness of this new visual saliency model.
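
As a rough illustration of the fusion step described above, the following Python sketch builds Gabor orientation feature maps and combines them linearly with entropy-derived weights. This is a minimal sketch under stated assumptions: the phase-congruency moment maps are omitted, and all function names, parameters, and the exact weighting rule are illustrative rather than taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): entropy-weighted
# linear fusion of orientation feature maps, as outlined in the abstract.
import numpy as np
from scipy.ndimage import convolve


def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5):
    """Real-valued Gabor kernel at orientation `theta` (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * x_t / lam)


def map_entropy(fmap, bins=64):
    """Shannon entropy of a normalized feature map's intensity histogram."""
    hist, _ = np.histogram(fmap, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))


def fuse_feature_maps(image, n_orientations=4):
    """Entropy-weighted linear fusion of Gabor orientation feature maps."""
    maps = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        response = np.abs(convolve(image, gabor_kernel(theta=theta)))
        response = (response - response.min()) / (np.ptp(response) + 1e-12)
        maps.append(response)
    # Assumption: lower entropy indicates a more compact (more salient) map
    # and therefore receives a larger weight; the paper's exact rule may differ.
    entropies = np.array([map_entropy(m) for m in maps])
    weights = entropies.max() - entropies + 1e-12
    weights /= weights.sum()
    saliency = sum(w * m for w, m in zip(weights, maps))
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)
```

In practice, the phase-congruency moment maps would simply be appended to `maps` before the entropy weighting, so that both BU and TD cues enter the same linear combination.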
