Fusion of visual salience maps for object acquisition

The paradigm of visual attention has been widely investigated and applied in many computer vision applications. In this study, the authors propose a new saliency-based visual attention algorithm for object acquisition. The proposed algorithm automatically extracts points of visual attention (PVA) in the scene from several feature saliency maps, each representing a specific feature domain, such as textural, contrast, and statistical-based features. A feature-selection procedure, based on probability of detection, false alarm rate, and repeatability criteria, is proposed to choose the most effective feature combination for the fused saliency map. Motivated by the assumption that the extracted PVA represent the most visually salient regions in the image, they suggest using the visual attention approach for object acquisition. A comparison with other well-known point-of-interest detection algorithms shows that the proposed algorithm performs better. The proposed algorithm was successfully tested on synthetic, charge-coupled device (CCD), and infrared (IR) images. Evaluation of the algorithm for object acquisition, based on ground truth, is carried out on synthetic images containing multiple object instances of various sizes and brightness levels. A high probability of correct detection (greater than 90%) with a low false alarm rate (about 20 false alarms per image) was achieved.
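To make the fusion-and-extraction pipeline concrete, the sketch below shows one plausible reading of the approach: compute a few feature saliency maps, normalize and fuse them with weights, and take local maxima of the fused map as PVA. This is a minimal illustration, not the authors' implementation; the specific features (center-surround contrast, local standard deviation), the fusion weights, and the helper names (`fuse_saliency`, `extract_pva`) are assumptions introduced here for clarity.

```python
import numpy as np
from scipy import ndimage


def contrast_map(img, sigma=2.0):
    # Center-surround contrast: difference between fine and coarse
    # Gaussian-smoothed versions of the image (a stand-in for the
    # paper's contrast-domain feature).
    fine = ndimage.gaussian_filter(img, sigma)
    coarse = ndimage.gaussian_filter(img, 4 * sigma)
    return np.abs(fine - coarse)


def texture_map(img, size=7):
    # Local gray-level standard deviation as a simple textural feature.
    mean = ndimage.uniform_filter(img, size)
    mean_sq = ndimage.uniform_filter(img * img, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))


def normalize(m):
    # Scale each feature map to [0, 1] so maps from different feature
    # domains can be fused on a common footing.
    lo, hi = m.min(), m.max()
    return (m - lo) / (hi - lo + 1e-12)


def fuse_saliency(img, weights=(0.5, 0.5)):
    # Weighted fusion of the normalized feature saliency maps; the weights
    # stand in for the feature-selection step described in the abstract.
    maps = [contrast_map(img), texture_map(img)]
    return sum(w * normalize(m) for w, m in zip(weights, maps))


def extract_pva(saliency, num_points=10, nms_size=15):
    # Points of visual attention: local maxima of the fused map,
    # thinned with a simple non-maximum-suppression window.
    local_max = saliency == ndimage.maximum_filter(saliency, nms_size)
    ys, xs = np.nonzero(local_max)
    order = np.argsort(saliency[ys, xs])[::-1][:num_points]
    return list(zip(ys[order], xs[order]))


if __name__ == "__main__":
    img = np.random.rand(256, 256)
    img[100:120, 80:100] += 2.0  # bright synthetic "object"
    print(extract_pva(fuse_saliency(img)))
```

In this reading, feature selection amounts to choosing which maps enter `fuse_saliency` and with what weights, evaluated against detection probability, false alarm rate, and repeatability on ground-truth data.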
