Robot task-driven attention

Visual attention is a crucial capability in humans: it allows optimal deployment of limited visual-processing and memory resources. It is even more valuable in search tasks, where salient regions are selected using top-down priors, which depend on the observed scene and the task, together with bottom-up criteria. In this paper we present a robotic model of attention inspired by studies of human attention and gaze shifting. The model relies on a measure of salience tied to the particular type of environment and to the given task. This measure is hierarchically structured and combines top-down components, learned from a tutor, with bottom-up components perceived in the scene by the robot. With such a general model, the robot can produce its own scan-path in a similar environment and report its findings.
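The combination of learned top-down weights with bottom-up conspicuity, followed by sequential fixation selection, can be sketched as follows. This is a minimal illustration in the spirit of Itti-Koch-style saliency models with winner-take-all selection and inhibition of return; the feature names, the linear combination rule, and the `scan_path` function are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def scan_path(feature_maps, top_down_weights, n_fixations=5, inhibition_radius=2):
    """Greedy scan-path over a task-weighted salience map.

    feature_maps: dict name -> 2D array of bottom-up conspicuity
                  (e.g. intensity, colour, orientation).
    top_down_weights: dict name -> task-specific weight (learned from a tutor).
    Returns a list of (row, col) fixation points.
    """
    # Hierarchical combination: task-driven weights scale bottom-up maps.
    salience = sum(top_down_weights[name] * fmap
                   for name, fmap in feature_maps.items())
    fixations = []
    for _ in range(n_fixations):
        # Winner-take-all: fixate the current salience maximum.
        y, x = np.unravel_index(np.argmax(salience), salience.shape)
        fixations.append((y, x))
        # Inhibition of return: suppress the fixated neighbourhood
        # so the next fixation moves elsewhere.
        y0, x0 = max(0, y - inhibition_radius), max(0, x - inhibition_radius)
        salience[y0:y + inhibition_radius + 1, x0:x + inhibition_radius + 1] = -np.inf
    return fixations
```

A scene where the task makes intensity twice as informative as colour would use weights like `{"intensity": 2.0, "colour": 1.0}`; the resulting scan-path visits the weighted-salience peaks in order.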
