Edge-based placement of camera and light source for object recognition and location

The selection and placement of cameras and light sources for a vision task is an essential step in both autonomous visual sensing and standard industrial applications. A technique is presented that determines the three-dimensional region of light-source locations from which one or more specified object edges will be detected with a given edge operator. The method uses a task description, comprising a list of object edges and the edge operator to be used, to derive constraints on image contrast, surface irradiance, and light-source location. Combined with prior results on camera and light-source placement, the method can be applied to object recognition and location tasks.
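The core idea of a contrast constraint on light-source placement can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's method: it models the two faces meeting at an edge as Lambertian surfaces lit by a point source, computes a Michelson-style contrast across the edge, and keeps only candidate source positions whose contrast exceeds a detection threshold standing in for the edge operator's sensitivity. All function names, the albedo and threshold values, and the Lambertian assumption are illustrative.

```python
import numpy as np

def lambertian_irradiance(light_pos, point, normal, albedo, intensity=1.0):
    """Scene radiance of a Lambertian face at `point` under a point source.

    Falls off with the inverse square of source distance; clamped at zero
    when the source is behind the face. Illustrative model only.
    """
    l = light_pos - point
    d2 = float(np.dot(l, l))
    l_hat = l / np.sqrt(d2)
    return max(0.0, albedo * intensity * float(np.dot(normal, l_hat)) / d2)

def contrast(light_pos, point, n1, n2, rho1, rho2):
    """Michelson-style contrast between the two faces meeting at an edge."""
    e1 = lambertian_irradiance(light_pos, point, n1, rho1)
    e2 = lambertian_irradiance(light_pos, point, n2, rho2)
    if e1 + e2 == 0.0:
        return 0.0
    return abs(e1 - e2) / (e1 + e2)

def admissible_lights(candidates, point, n1, n2, rho1, rho2, c_min):
    """Keep candidate source positions satisfying the contrast constraint."""
    return [p for p in candidates
            if contrast(p, point, n1, n2, rho1, rho2) >= c_min]
```

For a roof edge with face normals (0, 0, 1) and (0, 1, 0), a source directly above one face yields maximal contrast, while a source on the bisecting plane illuminates both faces equally and yields none; sampling candidate positions over a sphere of directions and filtering with `admissible_lights` gives a discrete approximation of the admissible light-source region.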
