Object-level saliency detection based on spatial compactness assumption

Object-level saliency detection is an important aspect of visual saliency. Most existing methods build on the contrast assumption, which highlights regions that contrast strongly with their context; however, it fails in several common scenarios. In this paper, we propose a novel spatial compactness assumption, which holds that salient regions are spatially more compact than background regions. Based on this assumption, we present two object-level saliency detection methods: a patch-based method and a region-based method. In experiments on a public dataset, both methods are compared with nine state-of-the-art methods and achieve the best performance. The results show that the spatial compactness assumption is valid and that the proposed methods uniformly highlight salient objects, even large ones.
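To make the compactness idea concrete, the sketch below scores pre-segmented regions by the spatial spread of similarly colored regions: colors concentrated in one place (compact) score high, colors scattered over the image score low. This is a minimal illustration of the assumption only, not the authors' patch-based or region-based formulation; the function name, arguments, and the Gaussian color-similarity weighting are hypothetical choices.

```python
import numpy as np

def compactness_saliency(labels, features, sigma=0.2):
    """Illustrative compactness-based saliency (not the paper's exact method).

    labels   : (H, W) int array, region index per pixel (e.g., from mean shift)
    features : (R, 3) array, mean color of each region, values in [0, 1]
    Returns a saliency score in [0, 1] for each of the R regions.
    """
    H, W = labels.shape
    n_regions = features.shape[0]

    # Mean spatial position of each region, normalized to [0, 1].
    ys, xs = np.mgrid[0:H, 0:W]
    pos = np.zeros((n_regions, 2))
    for r in range(n_regions):
        mask = labels == r
        pos[r] = [ys[mask].mean() / H, xs[mask].mean() / W]

    # Pairwise color similarity between regions (Gaussian kernel), row-normalized.
    dist2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    w = np.exp(-dist2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)

    # Spatial spread of each region's similarly colored regions:
    # the weighted variance of their positions. Small spread = compact = salient.
    mean_pos = w @ pos
    spread = (w * ((pos[None, :, :] - mean_pos[:, None, :]) ** 2).sum(-1)).sum(1)

    saliency = np.exp(-spread / spread.max())
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)
```

A background color such as sky or grass is typically spread across the whole frame and therefore gets a low score, while an object's colors cluster in a small area and score high, which is exactly the behavior the spatial compactness assumption predicts.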
