Virtual teaching based on hand manipulability for multi-fingered robots

Virtual robot teaching, which combines human demonstration with motion-intention analysis in a virtual reality environment, is an advanced automatic-programming technology for multi-fingered robots. For the virtual hand model displayed on-screen, a human-hand model is better than a robot-hand model in terms of teaching time and stable manipulation of the virtual object. However, the robot may fail to grasp an object at the taught position and orientation of the robot hand, because the geometrical size and motion capability of the robot hand differ from those of the human hand. To solve this problem, we propose virtual teaching based on hand manipulability, in which the position and orientation of the robot hand are determined so as to maximize the manipulability of the robot hand under the condition that the robot grasps the object at the taught contact points on the object. Experimental results for a pick-and-place task demonstrate the effectiveness of the proposed method.
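The abstract does not specify which manipulability measure is maximized, but a common choice in the grasping literature is Yoshikawa's measure, w = sqrt(det(J J^T)), computed from the hand Jacobian. The sketch below is only an illustration of that idea, assuming a hypothetical planar 3-link finger as a stand-in for a real multi-fingered hand model; the link lengths and joint angles are made-up values, not from the paper.

```python
import numpy as np

def jacobian_planar_3link(q, lengths):
    # Position Jacobian of a planar 3-link finger (illustrative stand-in
    # for the Jacobian of a real robot-hand kinematic model).
    l1, l2, l3 = lengths
    q1, q2, q3 = q
    s1, s12, s123 = np.sin(q1), np.sin(q1 + q2), np.sin(q1 + q2 + q3)
    c1, c12, c123 = np.cos(q1), np.cos(q1 + q2), np.cos(q1 + q2 + q3)
    return np.array([
        [-l1 * s1 - l2 * s12 - l3 * s123, -l2 * s12 - l3 * s123, -l3 * s123],
        [ l1 * c1 + l2 * c12 + l3 * c123,  l2 * c12 + l3 * c123,  l3 * c123],
    ])

def manipulability(J):
    # Yoshikawa's manipulability measure: w = sqrt(det(J J^T)).
    # w -> 0 near kinematic singularities, larger in dexterous postures.
    return float(np.sqrt(np.linalg.det(J @ J.T)))

# Compare two candidate grasp configurations; a hand-placement optimizer
# would prefer the one with the higher measure.
lengths = (0.05, 0.03, 0.02)                   # link lengths in metres (assumed)
q_near_singular = np.array([0.1, 0.01, 0.01])  # finger almost fully extended
q_bent = np.array([0.3, 0.6, 0.4])             # comfortably flexed
w_extended = manipulability(jacobian_planar_3link(q_near_singular, lengths))
w_bent = manipulability(jacobian_planar_3link(q_bent, lengths))
```

In the proposed method this kind of measure would be evaluated over candidate hand poses that keep the taught contact points on the object, and the pose with the largest value selected.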
