Automatic Sensor Placement from Vision Task Requirements

The problem of automatically generating the possible camera locations for observing an object is defined, and an approach to its solution is presented. The approach, which uses models of the object and the camera, is based on meeting the requirements that the spatial resolution be above a minimum value, all surface points be in focus, all surfaces lie within the sensor field of view, and no surface points be occluded. The approach converts each sensing requirement into a geometric constraint on the sensor location, from which the three-dimensional region of viewpoints satisfying that constraint is computed. The intersection of these regions is the space in which a sensor may be placed. The extension of this approach to laser-scanner range sensors is also described. Examples illustrate the resolution, focus, and field-of-view constraints for two vision tasks.
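The core idea — convert each sensing requirement into a geometric constraint, compute the region of viewpoints satisfying it, and intersect the regions — can be sketched in simplified one-dimensional form. The sketch below considers only the stand-off distance along the optical axis under a pinhole-camera model; all parameter values and function names are illustrative assumptions, not the paper's actual formulation, and the occlusion constraint (which depends on full object geometry) is omitted.

```python
# Hypothetical camera and task parameters (illustrative values only).
FOCAL_LENGTH = 0.025      # lens focal length, metres
PIXEL_SIZE = 1e-5         # sensor pixel pitch, metres
MIN_RESOLUTION = 5e-4     # required surface resolution, metres per pixel
SENSOR_WIDTH = 0.01       # sensor width, metres
OBJECT_WIDTH = 0.2        # width of the surface to observe, metres
NEAR_FOCUS, FAR_FOCUS = 0.3, 1.5   # depth-of-field limits, metres

def resolution_ok(d):
    # Footprint of one pixel on the surface at distance d must be small enough.
    return d * PIXEL_SIZE / FOCAL_LENGTH <= MIN_RESOLUTION

def field_of_view_ok(d):
    # The projected sensor footprint at distance d must cover the surface.
    return d * SENSOR_WIDTH / FOCAL_LENGTH >= OBJECT_WIDTH

def focus_ok(d):
    # The surface must lie within the depth of field.
    return NEAR_FOCUS <= d <= FAR_FOCUS

# Intersect the per-constraint regions by testing candidate stand-off distances.
candidates = [0.05 + k * 0.005 for k in range(400)]
admissible = [d for d in candidates
              if resolution_ok(d) and field_of_view_ok(d) and focus_ok(d)]
print(f"admissible stand-off range: {min(admissible):.3f}..{max(admissible):.3f} m")
```

Note how the constraints pull in opposite directions: resolution caps the distance from above, while the field-of-view requirement bounds it from below, so the admissible region is the (possibly empty) intersection of intervals. The full method computes analogous intersections over three-dimensional viewpoint regions.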