Sensor planning in 3D object search: its formulation and complexity

Object search is the task of searching for a given 3D object in a given 3D environment with a robot equipped with a camera. Sensor planning for object search refers to the task of selecting the sensing parameters of the camera so as to bring the target into the camera's field of view and to make the image of the target easily recognizable by the available recognition algorithms. In this paper, we study the task of sensor planning for object search from the theoretical point of view. We formulate the task and point out many of its important properties. We then analyze the task at the complexity level and prove that it is NP-complete.
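Although the abstract does not spell out the formulation, a minimal sketch can illustrate the kind of decision problem it refers to, assuming the common setup in this line of work: the search region is discretized into cells carrying a prior probability of containing the target, and each candidate sensing action (a choice of camera parameters) has a cost, a set of cells it brings into view, and a detection probability. All names and parameters below are illustrative assumptions, not the paper's formulation; the greedy budget-limited planner is shown only because the NP-completeness result proved in the paper is exactly what motivates heuristic rather than exact planning.

```python
# Hypothetical sketch of budgeted sensor planning for object search.
# Assumptions (not from the paper): cells with prior probabilities,
# sensing actions with cost / coverage / detection probability, and a
# greedy gain-per-cost heuristic under a total cost budget.

from dataclasses import dataclass
from typing import List, Set


@dataclass
class SensingAction:
    name: str                 # e.g. a (pan, tilt, zoom) setting -- assumed
    cost: float               # time to apply the action and analyze the image
    covered_cells: Set[int]   # cells of the search region brought into view
    detect_prob: float        # chance recognition succeeds if the target is in view


def greedy_plan(prior: List[float],
                actions: List[SensingAction],
                budget: float) -> List[SensingAction]:
    """Greedily pick actions with the best probability gain per unit cost.

    prior[c] is the probability that the target lies in cell c. After an
    action, the probability mass of the examined cells is discounted by the
    action's detection probability (a simple update assumed for this sketch).
    """
    remaining = list(prior)
    plan: List[SensingAction] = []
    spent = 0.0
    available = list(actions)

    while available:
        def gain(a: SensingAction) -> float:
            return a.detect_prob * sum(remaining[c] for c in a.covered_cells)

        best = max(available, key=lambda a: gain(a) / a.cost)
        if spent + best.cost > budget or gain(best) <= 0.0:
            break
        plan.append(best)
        spent += best.cost
        for c in best.covered_cells:
            remaining[c] *= (1.0 - best.detect_prob)
        available.remove(best)

    return plan


if __name__ == "__main__":
    # Four cells with a uniform prior and three candidate camera settings.
    prior = [0.25, 0.25, 0.25, 0.25]
    actions = [
        SensingAction("wide-left", cost=2.0, covered_cells={0, 1}, detect_prob=0.6),
        SensingAction("wide-right", cost=2.0, covered_cells={2, 3}, detect_prob=0.6),
        SensingAction("zoom-cell-2", cost=1.0, covered_cells={2}, detect_prob=0.9),
    ]
    for a in greedy_plan(prior, actions, budget=4.0):
        print(a.name)
```

Such a heuristic gives no optimality guarantee; the paper's contribution is to show that the underlying selection problem is NP-complete, which is why approximate strategies of this kind are the practical route.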
