Hand action perception for robot programming

This paper presents a general and robust approach to hand action perception for automatic robot programming from depth image sequences. The human instructor simply demonstrates an assembly task in front of a vision system in the human world; no dataglove or special markings are necessary. The recorded image sequences are used to recover a depth image sequence, which drives model-based tracking of the human hand and the manipulated object to form the perceptual data stream. This data stream is then segmented and interpreted to generate a task sequence describing the hand action and the relationship between the manipulated object and the hand. The task sequence may comprise a series of subtasks, each involving four phases: approaching, pre-manipulating, manipulating, and departing. We also discuss a robot system that replicates the observed task and automatically validates the replication results in the robot world.
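As an illustrative sketch (all names are hypothetical, not taken from the paper), the task-sequence representation described above, a series of subtasks each segmented into four phases, could be modeled as:

```python
from dataclasses import dataclass, field
from enum import Enum


class Phase(Enum):
    """The four phases each subtask passes through."""
    APPROACHING = "approaching"
    PRE_MANIPULATING = "pre-manipulating"
    MANIPULATING = "manipulating"
    DEPARTING = "departing"


@dataclass
class Subtask:
    """One unit of the demonstrated task: the recognized hand action
    and the hand/object relationship, with the four-phase segmentation."""
    hand_action: str           # e.g. a recognized grasp or motion type
    manipulated_object: str    # object the hand acts on
    phases: list = field(default_factory=lambda: list(Phase))


@dataclass
class TaskSequence:
    """Ordered subtasks recovered from the perceptual data stream."""
    subtasks: list


# Usage: a hypothetical two-step assembly demonstration
seq = TaskSequence(subtasks=[
    Subtask(hand_action="pick", manipulated_object="peg"),
    Subtask(hand_action="insert", manipulated_object="peg"),
])
```

Each subtask carries its own phase list, so a downstream replication system can map the pre-manipulating and manipulating phases onto robot grasp planning and execution primitives.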
