Computing Camera Viewpoints in an Active Robot Work Cell

This paper presents a dynamic sensor-planning system capable of planning the locations and settings of vision sensors for use in an environment containing objects moving in known ways. The key component of this work is the computation of the camera position, orientation, and optical settings to be used over a time interval. A new viewpoint-computation algorithm is presented that ensures the feature-detectability constraints of focus, resolution, field of view, and visibility are satisfied. To perform sensor-planning experiments, a five-degree-of-freedom Cartesian robot carrying a CCD camera in a hand/eye configuration was constructed surrounding the work cell of a Puma 560 robot. The results of these experiments, demonstrating the use of the system in a robot work cell, are presented.
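To make the four feature-detectability constraints concrete, the sketch below checks a single candidate viewpoint against simple thin-lens models of focus, resolution, and field of view. This is an illustrative sketch only, not the paper's actual algorithm: all function names, parameters, and thresholds are assumptions, and the visibility (occlusion) constraint is omitted because it requires geometric reasoning against the full scene model.

```python
import math

# Illustrative per-viewpoint feasibility checks for three of the four
# feature-detectability constraints (focus, resolution, field of view).
# Units are assumed to be millimetres and radians throughout.

def in_field_of_view(target, cam_pos, view_dir, half_angle):
    """Target point must lie inside the camera's viewing cone
    (view_dir is assumed to be a unit vector)."""
    v = [t - c for t, c in zip(target, cam_pos)]
    dist = math.sqrt(sum(x * x for x in v))
    if dist == 0:
        return False
    cos_angle = sum(a * b for a, b in zip(v, view_dir)) / dist
    return cos_angle >= math.cos(half_angle)

def resolution_ok(dist, focal_len, pixel_size, min_px_per_mm):
    """Thin-lens magnification m = f / (d - f); the feature must
    project onto at least min_px_per_mm pixels per scene millimetre."""
    magnification = focal_len / (dist - focal_len)
    return (magnification / pixel_size) >= min_px_per_mm

def in_focus(dist, focal_len, f_number, blur_limit, focus_dist):
    """Blur-circle diameter at the sensor (thin-lens depth-of-field
    model) must stay below the acceptable limit."""
    aperture = focal_len / f_number          # entrance-pupil diameter
    blur = aperture * focal_len * abs(dist - focus_dist) / (
        dist * (focus_dist - focal_len))
    return blur <= blur_limit
```

A candidate viewpoint would be accepted only when all checks pass; a planner in the spirit described above would then search over camera pose and optical settings (and, here, over the time interval) for viewpoints satisfying every constraint simultaneously.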
