The task of sensor planning for object search is formulated, and a mechanism for deciding "where to look next" is presented. The searcher is assumed to be a mobile platform equipped with an active camera and a means of computing depth, such as stereo or a laser range finder. Sensor planning is cast as an optimization problem: the goal is to maximize the probability of detecting the target object while minimizing cost. The search space is therefore characterized by the probability distribution of the target's presence. The control of the sensing parameters depends on the current state of the search space and on the detecting ability of the recognition algorithm. To represent the environment and to determine the sensing parameters efficiently over time, a concept called the sensed sphere is proposed and its construction, using a laser range finder, is derived. The result of each sensing operation is used to update the status of the search space.
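To make the optimization concrete, the following Python sketch illustrates one plausible reading of the formulation: the search space is discretized into cells carrying a target-presence probability, each sensing action has a detection function and a cost, the planner greedily selects the action with the highest expected detection probability per unit cost, and a Bayesian update redistributes probability after a failed sensing operation. The function names, the cell grid, and the particular detection and cost values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def plan_next_action(p, actions, detect_prob, cost):
    """Greedily pick the sensing action with the best expected
    detection probability per unit cost -- one simple surrogate for
    the maximize-probability / minimize-cost trade-off described
    in the abstract (hypothetical, not the paper's exact method)."""
    best, best_score = None, -np.inf
    for a in actions:
        # Expected detection probability of action a:
        # sum over cells of P(target in cell) * P(detect | in cell, a).
        expected = float(np.sum(p * detect_prob[a]))
        score = expected / cost[a]
        if score > best_score:
            best, best_score = a, score
    return best

def update_after_failure(p, a, detect_prob):
    """Bayesian update of the target distribution after action a
    fails to detect the target: probability mass shifts toward
    cells the action could not see well."""
    miss = p * (1.0 - detect_prob[a])  # joint: target in cell AND missed
    return miss / miss.sum()           # renormalize over the search space

# Illustrative setup: 100 cells with a uniform prior, and three
# hypothetical sensing actions that each cover a random subset of
# cells with 0.8 detection probability.
rng = np.random.default_rng(0)
n_cells = 100
p = np.full(n_cells, 1.0 / n_cells)
actions = ["pan_left", "pan_right", "zoom_center"]
detect_prob = {a: np.where(rng.random(n_cells) < 0.3, 0.8, 0.0)
               for a in actions}
cost = {"pan_left": 1.0, "pan_right": 1.0, "zoom_center": 2.0}

a = plan_next_action(p, actions, detect_prob, cost)
p = update_after_failure(p, a, detect_prob)  # sensing failed; re-plan next
```

In this sketch, repeating the plan/sense/update loop naturally drives the sensor toward regions where residual probability accumulates, which is the qualitative behavior the abstract's "update the status of the search space" step implies.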