Teaching for multi-fingered robots based on motion intention in virtual reality

We present a teaching method for multi-fingered robots based on analysis of the operator's motion intention in virtual reality. In the motion intention analysis, the demonstrated motion is divided into multiple primitive motions, primitives unnecessary for executing the task are deleted, and the remaining primitives are represented by a smooth time function. Segmentation uses 3D measurements of the human hand and of virtual objects, together with the virtual reaction force applied to the operator's hand by a force-feedback glove. The analyzed motion is encoded as human motion commands, and from each human motion command a robot teaching command in the object coordinate frame is generated. Experimental results on a pick-and-place task verify the feasibility of the proposed robot teaching method.
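The abstract compresses a pipeline (segment the demonstration into primitives, prune unnecessary ones, represent the rest by a smooth time function, then emit commands). The sketch below illustrates one plausible reading of that pipeline in Python; the speed-and-contact break criterion, all thresholds, and the use of a minimum-jerk profile as the smooth time function are assumptions for illustration, not the paper's exact formulation.

# Illustrative sketch only; thresholds and the minimum-jerk choice are assumptions.
import numpy as np

def segment_primitives(positions, contact, dt, speed_eps=0.01):
    """Split a recording into primitive motions at virtual-contact changes
    and at points where the hand speed drops near zero."""
    speed = np.linalg.norm(np.gradient(positions, dt, axis=0), axis=1)
    breaks = [0]
    for k in range(1, len(positions)):
        if contact[k] != contact[k - 1] or (speed[k] < speed_eps <= speed[k - 1]):
            breaks.append(k)
    breaks.append(len(positions) - 1)
    return [(a, b) for a, b in zip(breaks[:-1], breaks[1:]) if b > a]

def is_unnecessary(positions, seg, min_travel=0.005):
    """Prune primitives with negligible net displacement (e.g. hovering)."""
    a, b = seg
    return np.linalg.norm(positions[b] - positions[a]) < min_travel

def minimum_jerk(x0, xf, T, rate=50.0):
    """Represent a kept primitive by a smooth time function; a minimum-jerk
    profile is assumed here."""
    n = max(int(T * rate), 2)
    tau = np.linspace(0.0, 1.0, n)[:, None]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

# Toy demonstration: a 2 s straight-line reach sampled at 100 Hz, with the
# force-feedback glove reporting contact for the second half.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
positions = np.stack([0.1 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
contact = (t > 1.0).astype(int)
segments = segment_primitives(positions, contact, dt)
kept = [s for s in segments if not is_unnecessary(positions, s)]
trajectories = [minimum_jerk(positions[a], positions[b], (b - a) * dt)
                for a, b in kept]

In the paper's pipeline, each kept trajectory would correspond to a human motion command, from which a robot teaching command in the object coordinate frame is then generated.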
