Integrating Grasp Planning and Visual Servoing for Automatic Grasping

In this paper we describe a method for aligning a robot gripper (or any other end effector) with an object; grasping is one example of such a gripper/object alignment. The task consists of, first, computing an alignment condition and, second, servoing the robot so that it moves to and reaches the desired position. A single camera provides the visual feedback necessary to estimate the location of the object to be grasped, to determine the gripper/object alignment condition, and to dynamically control the robot's motion. The original contributions of this paper are the following. Since the camera is not mounted on the robot, it is crucial to express the alignment condition so that it does not depend on the intrinsic and extrinsic camera parameters. We therefore develop a method for expressing the alignment condition (the relative location of the gripper with respect to the object) that is projectively invariant, i.e., it is view invariant and does not require a calibrated camera. The central issue of any image-based servoing method is the estimation of the image Jacobian, which relates the 3-D velocity field of a moving object to the image velocity field. In the past, exact estimation of this Jacobian has been avoided because of the lack of a fast and robust method for estimating the pose of a 3-D object with respect to a camera. We discuss the advantage of using an exact image Jacobian with respect to the dynamic behaviour of the servoing process. From an experimental point of view, we describe a grasping experiment involving image-based object localization, grasp planning, and visual servoing.
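To make the role of the image Jacobian concrete, the sketch below shows the standard point-feature interaction matrix used in image-based visual servoing, together with the classic proportional control law. This is a generic textbook formulation, not the paper's specific method: the function names are illustrative, (x, y) are normalized image coordinates of a tracked feature, and Z is the feature's depth, which an exact Jacobian obtains from pose estimation rather than from a rough guess.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian for one point feature at normalized image
    coordinates (x, y) with depth Z: maps the camera's 6-D velocity
    screw (vx, vy, vz, wx, wy, wz) to the feature's image velocity."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,          -(1.0 + x * x), y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y * y,    -x * y,         -x],
    ])

def servo_velocity(features, desired, depths, gain=0.5):
    """Classic IBVS control law v = -gain * L^+ (s - s*): stack one
    2x6 Jacobian per feature, then regress the image error through
    the pseudo-inverse to get a commanded camera velocity."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

Note how every translational entry of the matrix scales with 1/Z: this is why an exact pose (and hence exact depth) changes the dynamic behaviour of the servo loop, whereas approximating Z only affects the convergence trajectory, not the equilibrium.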
