A reinforcement learning model of selective visual attention

This paper proposes a model of selective attention for visual search tasks, built on a framework for sequential decision-making. The model is implemented on a fixed pan-tilt-zoom camera in a visually cluttered lab environment, which samples the scene at discrete time steps. At each step the agent must decide, purely from visual information, where to fixate next in order to reach the region where the target object is most likely to be found. The model consists of two interacting modules. A reinforcement learning module learns a policy over a set of regions in the room for reaching the target object, using the expected sum of discounted rewards as its objective function. By selecting an appropriate gaze direction at each step, this module provides top-down control over the selection of the next fixation point. The second module performs “within-fixation” processing based exclusively on visual information. Its purpose is twofold: to provide the agent with a set of locations of interest in the current image, and to detect and identify the target object. Detailed experimental results show that the number of saccades needed to reach the target object decreases significantly with the number of training epochs. The results also show that the learned policy for finding the target object is invariant to small physical displacements of the object as well as to object inversion.
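The abstract does not specify which reinforcement learning algorithm the first module uses, only that it learns a policy over discrete regions that maximizes the expected sum of discounted rewards. As a minimal illustrative sketch, the following toy program assumes tabular Q-learning over a one-dimensional line of gaze regions, with a reward of 1 when a saccade lands on the (hypothetical) target region; the region layout, reward scheme, and hyperparameters are all assumptions, not details from the paper.

```python
# Toy sketch: tabular Q-learning over a discrete set of gaze regions.
# Assumed setup (not from the paper): N_REGIONS regions on a line,
# the target sits in region TARGET, and each saccade moves one region
# left or right. Reward 1 is given when a saccade reaches the target.
import random

N_REGIONS = 6
TARGET = 4
GAMMA = 0.9         # discount factor in the sum of discounted rewards
ALPHA = 0.5         # learning rate
ACTIONS = (-1, +1)  # saccade to the adjacent region left / right

def step(region, action):
    """Apply one saccade; clamp at the edges of the region set."""
    nxt = max(0, min(N_REGIONS - 1, region + action))
    reward = 1.0 if nxt == TARGET else 0.0
    return nxt, reward, nxt == TARGET

def train(episodes=500, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning; returns the Q-table."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_REGIONS)]
    for _ in range(episodes):
        region = rng.randrange(N_REGIONS)
        for _ in range(50):  # cap the number of saccades per episode
            a = (rng.randrange(2) if rng.random() < eps
                 else max((0, 1), key=lambda i: q[region][i]))
            nxt, r, done = step(region, ACTIONS[a])
            target = r if done else r + GAMMA * max(q[nxt])
            q[region][a] += ALPHA * (target - q[region][a])
            region = nxt
            if done:
                break
    return q

def saccades_to_target(q, start):
    """Greedy rollout: saccades needed to fixate the target region."""
    region, count = start, 0
    while region != TARGET and count < 50:
        a = max((0, 1), key=lambda i: q[region][i])
        region, _, _ = step(region, ACTIONS[a])
        count += 1
    return count

q = train()
```

After training, greedy rollouts from any starting region reach the target in the minimum number of saccades, mirroring the paper's finding that the number of saccades to the target decreases with training.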
