Frequency-tuned salient region detection

Detection of visually salient image regions is useful for applications like object segmentation, adaptive compression, and object recognition. In this paper, we introduce a method for salient region detection that outputs full-resolution saliency maps with well-defined boundaries of salient objects. These boundaries are preserved by retaining substantially more frequency content from the original image than existing techniques do. Our method exploits color and luminance features, is simple to implement, and is computationally efficient. We compare our algorithm to five state-of-the-art salient region detection methods using a frequency-domain analysis, ground truth, and a salient object segmentation application. Our method outperforms all five algorithms on both the ground-truth evaluation and the segmentation task, achieving higher precision and better recall.
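The core idea described above (a full-resolution saliency map built from color and luminance features, keeping most of the image's frequency content) can be sketched as follows. This is a minimal illustration, not the paper's reference implementation: it assumes the saliency of a pixel is the squared Euclidean distance between the mean image feature vector and a lightly Gaussian-blurred version of that pixel, and it skips the sRGB-to-CIELab color conversion the full method would use, operating directly on a float image.

```python
import numpy as np

def gaussian_kernel(sigma: float) -> np.ndarray:
    """1D Gaussian kernel, normalized to sum to 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(channel: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Separable 1D convolution along rows then columns, edge-padded
    so the output keeps the input's full resolution."""
    pad = len(kernel) // 2
    conv = lambda v: np.convolve(np.pad(v, pad, mode="edge"), kernel, mode="valid")
    out = np.apply_along_axis(conv, 1, channel)   # blur rows
    out = np.apply_along_axis(conv, 0, out)       # blur columns
    return out

def frequency_tuned_saliency(img: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """img: H x W x C float array (ideally CIELab channels; here we
    just assume some per-pixel color/luminance feature vector).
    Returns a full-resolution H x W saliency map: the squared distance
    between the mean feature vector of the whole image and each
    slightly blurred pixel. A small blur removes only the highest
    frequencies (noise, texture), so object boundaries stay sharp."""
    kernel = gaussian_kernel(sigma)
    mean_vec = img.reshape(-1, img.shape[2]).mean(axis=0)
    blurred = np.stack(
        [blur(img[..., c], kernel) for c in range(img.shape[2])], axis=-1
    )
    return ((blurred - mean_vec) ** 2).sum(axis=-1)

# Usage: a bright square on a dark background should score high.
img = np.zeros((32, 32, 3))
img[10:20, 10:20, :] = 1.0
sal = frequency_tuned_saliency(img)
```

Because the map has the same resolution as the input and only a mild blur is applied, thresholding it (e.g. at the map's mean) yields object masks with crisp boundaries, which is what makes it directly usable for segmentation.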
