Sensory Attention: Computational Sensor Paradigm for Low-Latency Adaptive Vision

The need for robust, self-contained, low-latency vision systems is growing in applications such as high-speed visual servoing and vision-based human-computer interfaces. Conventional vision systems can hardly meet this need because 1) latency is incurred by data-transfer and computational bottlenecks, and 2) there is no top-down feedback to adapt sensor performance for improved robustness. In this paper we present a tracking computational sensor — a VLSI implementation of sensory attention. The tracking sensor focuses attention on a salient feature in its receptive field and maintains this attention in world coordinates. Using both low-latency massively parallel processing and top-down sensory adaptation, the sensor reliably tracks features of interest while suppressing irrelevant features that may interfere with the task at hand.
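The attention mechanism described above can be illustrated in software. The following is a minimal sketch, not the paper's analog VLSI circuit: winner-take-all selection picks the most salient pixel, and top-down feedback is modeled as a window around the previously attended position, so a brighter distractor outside the window cannot capture attention. The function name, window parameter, and saliency representation are all assumptions for illustration.

```python
import numpy as np

def track_attention(frame, prev_pos=None, window=3):
    """Toy winner-take-all attention with top-down tracking bias.

    frame: 2-D array of per-pixel saliency (e.g., intensity).
    prev_pos: (row, col) of the previously attended feature, or None.
    window: half-width of the region around prev_pos that remains
        eligible to win; responses outside it are suppressed,
        mimicking the sensor's top-down adaptation.
    """
    saliency = frame.astype(float).copy()
    if prev_pos is not None:
        # Suppress everything outside the attention window.
        mask = np.full(frame.shape, -np.inf)
        r0, c0 = max(prev_pos[0] - window, 0), max(prev_pos[1] - window, 0)
        mask[r0:prev_pos[0] + window + 1, c0:prev_pos[1] + window + 1] = 0.0
        saliency = saliency + mask
    # Winner-take-all: the single most salient eligible pixel wins.
    return np.unravel_index(np.argmax(saliency), frame.shape)
```

For example, with a tracked target of saliency 5 at (4, 4) and a brighter distractor of saliency 9 at (0, 9), an unbiased winner-take-all selects the distractor, while passing `prev_pos=(4, 4)` keeps attention locked on the target.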
