Visual Search in Static and Dynamic Scenes Using Fine-Grain Top-Down Visual Attention

Artificial visual attention, inspired by natural vision, is a key methodology for achieving robust and efficient visual search in machine vision systems. A novel approach to modeling top-down visual attention is proposed in which separate saliency maps are maintained for the two attention pathways. The maps for the bottom-up pathway are built using unbiased rarity criteria, while the top-down maps are created using fine-grain feature similarity to the search target, as suggested by the literature on natural vision. Experiments on visual search with natural and artificial visual input, under both static and dynamic scenarios, demonstrate the robustness and efficiency of the model.
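The two-pathway idea in the abstract can be sketched in code: a bottom-up map that scores each location by how rare its feature value is in the scene, a top-down map that scores similarity to the known target feature, and a weighted combination for search. This is only an illustrative sketch; the histogram-based rarity measure, the Gaussian similarity kernel, and the weight `w_td` are assumptions for demonstration, not the paper's actual region-based formulation.

```python
import numpy as np

def bottom_up_rarity(feature, bins=16):
    """Bottom-up saliency: feature values that are rare in the scene score high."""
    hist, edges = np.histogram(feature, bins=bins, range=(0.0, 1.0))
    freq = hist / feature.size
    # Map every pixel to its histogram bin, then invert the bin frequency.
    idx = np.clip(np.digitize(feature, edges[1:-1]), 0, bins - 1)
    return 1.0 - freq[idx]

def top_down_similarity(feature, target, sigma=0.1):
    """Top-down saliency: Gaussian similarity of each pixel's feature to the target."""
    return np.exp(-((feature - target) ** 2) / (2.0 * sigma ** 2))

def combined_saliency(feature, target, w_td=0.7):
    """Weighted fusion of the two pathway maps (weight chosen for illustration)."""
    bu = bottom_up_rarity(feature)
    td = top_down_similarity(feature, target)
    return w_td * td + (1.0 - w_td) * bu

# Usage: a uniform scene with one odd pixel matching the target feature --
# both pathways agree, so the search maximum lands on that pixel.
feat = np.full((8, 8), 0.5)
feat[3, 4] = 0.9
sal = combined_saliency(feat, target=0.9)
```

In a real system each feature channel (color, orientation, motion) would get its own pair of maps, with the top-down pathway biasing search toward target-like regions while the bottom-up pathway preserves sensitivity to unexpected pop-out.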
