A visual attention model based on wavelet transform and its application to ship detection

The human visual system is highly efficient and selective in scene analysis, a property that has been widely exploited in image processing. In this paper, a new visual attention model based on the dyadic wavelet transform (DWT) is proposed for ship detection. It is a bottom-up visual attention model, driven by data rather than by task. First, the input image is converted from the RGB color space to the HSI color space. Second, the modulus of the DWT is analyzed to obtain a conspicuity map for each feature. Third, the conspicuity maps are combined nonlinearly into the saliency map. Unlike Itti's method, the contribution of each conspicuity map to the final saliency map is not equal: it depends on the difference between the level of the most active region and the average level of the other active regions in that conspicuity map. Finally, ships are detected from the saliency map by a region-growing method, where the seed is obtained from the saliency map and the growing is performed in the intensity image. Experiments on natural ship images show that the proposed method is robust and efficient compared with Itti's and Hou's methods.
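The pipeline above can be sketched in a few steps. The fragment below is a minimal illustrative sketch, not the paper's implementation: it shows the RGB-to-HSI conversion and a weighted fusion of conspicuity maps in which each map's weight grows with the gap between its strongest response and its other strong responses. The DWT modulus analysis is omitted, and `map_weight` is a crude stand-in (it treats the next-largest pixel values as "the other active regions"); the function names and the `n_peaks` parameter are assumptions for illustration.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image with float channels in [0, 1] to (H, S, I)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                               # intensity
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))    # hue angle in radians
    h = np.where(b <= g, theta, 2.0 * np.pi - theta)
    return h, s, i

def map_weight(conspicuity, n_peaks=5):
    """Illustrative weight: strongest response minus the mean of the
    next-strongest responses (a stand-in for 'other active regions')."""
    flat = np.sort(conspicuity.ravel())[::-1]
    return float(flat[0] - flat[1:n_peaks].mean())

def combine(conspicuity_maps):
    """Nonlinear fusion: each conspicuity map contributes in proportion
    to its weight, so maps with one dominant region count for more."""
    weights = np.array([map_weight(m) for m in conspicuity_maps])
    weights /= max(weights.sum(), 1e-12)                # normalize contributions
    return sum(w * m for w, m in zip(weights, conspicuity_maps))
```

The seed for the subsequent region growing would then be the saliency maximum, e.g. `seed = np.unravel_index(np.argmax(saliency), saliency.shape)`, with the growing itself carried out on the intensity component `i`.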

[1] N. Moray, Attention: selective processes in vision and hearing, 1970.

[2] Christof Koch et al., A Model of Saliency-Based Visual Attention for Rapid Scene Analysis, 2009.

[3] Christof Koch et al., Modeling attention to salient proto-objects, 2006, Neural Networks.

[4] Liming Zhang et al., Spatio-temporal saliency detection using phase spectrum of quaternion Fourier transform, 2008, 2008 IEEE Conference on Computer Vision and Pattern Recognition.

[5] Yongmei Zhang et al., A novel saliency map extraction method based on improved Itti's model, 2010, 2010 International Conference on Computer and Communication Technologies in Agriculture Engineering.

[6] Liqing Zhang et al., Saliency Detection: A Spectral Residual Approach, 2007, 2007 IEEE Conference on Computer Vision and Pattern Recognition.

[7] Huimin Xiao et al., A hierarchical computational model of visual attention using multi-layer analysis, 2010, 2010 Second International Conference on Communication Systems, Networks and Applications.

[8] John K. Tsotsos et al., Modeling Visual Attention via Selective Tuning, 1995, Artif. Intell.

[9] P. M. Engel et al., Visual Selective Attention Model for Robot Vision, 2008, 2008 IEEE Latin American Robotic Symposium.

[10] Bin Wang et al., Pulse discrete cosine transform for saliency-based visual attention, 2009, 2009 IEEE 8th International Conference on Development and Learning.

[11] Stéphane Mallat et al., Singularity detection and processing with wavelets, 1992, IEEE Trans. Inf. Theory.

[12] Ming Zeng et al., Integrating Perceptual Properties of the HVS into the Computational Model of Visual Attention, 2009, 2009 2nd International Congress on Image and Signal Processing.

[13] Ying Yu et al., Hebbian-Based Neural Networks for Bottom-Up Visual Attention Systems, 2009, ICONIP.

[14] Bin Wang et al., Visual Attention-Based Ship Detection in SAR Images, 2010.

[15] Pietro Perona et al., Graph-Based Visual Saliency, 2006, NIPS.

[16] C. Koch et al., Models of bottom-up and top-down visual attention, 2000.

[17] Liming Zhang et al., Biological Plausibility of Spectral Domain Approach for Spatiotemporal Visual Saliency, 2008, ICONIP.