Visually guided grasping in 3D

This paper addresses the problem of planning two-fingered grasps of unmodelled 3D objects using visual information. A family of simultaneous, interacting contour trackers is used to pick the top object off a pile, viewed from a camera on the end of a robot arm. A closed contour then localises the silhouette of the chosen object. Geometric information is obtained by analysing image motion as the robot executes deliberate motions around its vantage point. This information guides the selection of new vantage points in search of a view that contains a more favourable grasp on the new rim (the surface curve that projects to the silhouette). In this way, costly global reconstruction of object surfaces is avoided and planning is coupled with sensing in an efficient manner.
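The planar grasp test underlying this kind of planner can be sketched as follows: a two-fingered, frictional grasp on a smooth contour is force-closure when the chord between the two contacts lies inside both Coulomb friction cones. The sketch below is a minimal illustration, not the paper's implementation; the dense contour sampling, counterclockwise winding, friction coefficient `mu`, and the brute-force pair search are all assumptions introduced here.

```python
import numpy as np

def contour_normals(pts):
    """Unit outward normals for a closed, counterclockwise-sampled contour
    (assumed winding; central-difference tangents, rotated by -90 degrees)."""
    t = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    return np.stack([t[:, 1], -t[:, 0]], axis=1)

def antipodal_pairs(pts, mu=0.3):
    """Index pairs (i, j) where squeezing along the chord between the contacts
    keeps both contact forces inside friction cones of half-angle arctan(mu)."""
    n = contour_normals(pts)
    half = np.arctan(mu)          # Coulomb friction-cone half-angle
    pairs = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            d = pts[j] - pts[i]
            L = np.linalg.norm(d)
            if L < 1e-12:
                continue
            d /= L
            # the inward squeezing force at each contact is the negated outward normal;
            # it must lie within the friction cone about the chord direction
            a_i = np.arccos(np.clip(np.dot(-n[i], d), -1.0, 1.0))
            a_j = np.arccos(np.clip(np.dot(-n[j], -d), -1.0, 1.0))
            if a_i <= half and a_j <= half:
                pairs.append((i, j))
    return pairs
```

On a densely sampled circle, for example, only near-diametrically-opposite index pairs satisfy the test, matching the intuition that a disc is grasped across a diameter. The O(N²) search stands in for the curvature-based pruning a real planner would use.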
