When confronted with cluttered natural environments, animals still perform orders of magnitude better than artificial vision systems at visual tasks such as orienting, target detection, navigation and scene understanding. To better understand biological visual processing, we have developed a neuromorphic model of how visual attention is attracted towards conspicuous locations in a visual scene. It replicates processing in the dorsal ('where') visual stream of the primate brain. The model includes a bottom-up (image-based) computation of low-level color, intensity, orientation and flicker features, as well as a nonlinear spatial competition that enhances salient locations in each feature channel. All feature channels feed into a single scalar 'saliency map', which controls where attention is next focused. In this article, we discuss a parallel implementation of the model that runs at 30 frames/s on a 16-CPU Beowulf cluster, and the role of flicker (temporal-derivative) cues in computing salience. We show how our simple within-feature competition for salience effectively suppresses strong but spatially widespread motion transients resulting from egomotion. The model robustly detects salient targets in live outdoor video streams, despite large variations in illumination, heavy clutter, and rapid egomotion. The success of this approach suggests that neuromorphic vision algorithms may prove unusually robust for outdoor vision applications.
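The pipeline described above can be illustrated with a minimal sketch. The code below is not the paper's implementation: it keeps only an intensity channel and a flicker (temporal-derivative) channel, approximates center-surround receptive fields with a box-blur surround, and replaces the model's iterative difference-of-Gaussians competition with a crude global-peak weighting. All function names (`normalize_map`, `box_blur`, `saliency`) are hypothetical.

```python
import numpy as np

def normalize_map(m, eps=1e-9):
    # Crude stand-in for the model's within-feature spatial competition:
    # a map with one strong peak keeps its weight, while a map with many
    # comparable peaks (e.g. widespread egomotion transients) is suppressed.
    m = m - m.min()
    mx = m.max()
    if mx < eps:
        return m
    m = m / mx
    # After scaling to [0, 1], a high mean signals many competing peaks;
    # weight the whole map by (1 - mean)^2 to penalize that case.
    return m * (1.0 - m.mean()) ** 2

def box_blur(img, k):
    # Cheap surround estimate: k-by-k box average with edge padding.
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def saliency(frame, prev_frame=None):
    # Toy saliency map: intensity center-surround contrast, plus a
    # flicker channel (absolute temporal derivative) when a previous
    # frame is available; each channel is normalized, then summed.
    intensity = frame.astype(float)
    cs = np.abs(intensity - box_blur(intensity, 9))
    sal = normalize_map(cs)
    if prev_frame is not None:
        flicker = np.abs(intensity - prev_frame.astype(float))
        sal = sal + normalize_map(np.abs(flicker - box_blur(flicker, 9)))
    return sal
```

The attended location is then simply the maximum of the combined map, e.g. `np.unravel_index(np.argmax(saliency(frame)), frame.shape)`; the full model instead feeds the map into a winner-take-all network with inhibition of return.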