Visual Top-Down Attention Framework for Robots in Dynamic Environments

In this paper, a framework for flexible top-down visual attention for robots is introduced. At development time it is often unclear which objects should be in the focus of attention; at run time, on the other hand, there is usually not enough computing time available to compute all possible regions of interest (ROIs) for every camera frame. We therefore describe a framework that allows the application client to steer the attention and to run only the image processing that is currently needed. Two possible application scenarios, RoboCup and a service robot, are presented.
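
The following is a minimal sketch of this demand-driven idea, under assumed, hypothetical names (AttentionFramework, register, request, process) that are not taken from the paper: clients register ROI detectors once and then request only the ROIs they currently need, so that each camera frame triggers only the necessary image processing.

```python
# Minimal sketch of a demand-driven top-down attention framework.
# All class and method names here are illustrative assumptions,
# not the interface of the framework described in the paper.
from typing import Callable, Dict, List, Set, Tuple

ROI = Tuple[int, int, int, int]          # x, y, width, height
Detector = Callable[["Frame"], List[ROI]]


class Frame:
    """Placeholder for a single camera image."""
    def __init__(self, data=None):
        self.data = data


class AttentionFramework:
    """Clients register ROI detectors and request only the ones they
    currently need; per frame, only the requested detectors are run."""

    def __init__(self):
        self._detectors: Dict[str, Detector] = {}
        self._requested: Set[str] = set()

    def register(self, name: str, detector: Detector) -> None:
        self._detectors[name] = detector

    def request(self, name: str) -> None:
        # The application client steers attention by requesting an ROI type.
        self._requested.add(name)

    def release(self, name: str) -> None:
        self._requested.discard(name)

    def process(self, frame: Frame) -> Dict[str, List[ROI]]:
        # Only currently requested detectors are executed, so computing
        # time is not spent on ROI types no client cares about right now.
        return {name: self._detectors[name](frame)
                for name in self._requested if name in self._detectors}


if __name__ == "__main__":
    fw = AttentionFramework()
    fw.register("ball", lambda frame: [(10, 20, 16, 16)])   # dummy detector
    fw.register("goal", lambda frame: [(0, 0, 64, 32)])     # dummy detector

    fw.request("ball")          # e.g. a RoboCup client: only the ball matters now
    print(fw.process(Frame()))  # -> {'ball': [(10, 20, 16, 16)]}
```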