Robotic assembly operation teaching in a virtual environment

A "teaching by showing" system using a graphical interface is proposed. The system provides: 1) a user interface with which an inexperienced operator can easily teach a task, and 2) a task execution system that can perform the task in a different environment. The operator demonstrates an example of the assembly task's movements in a virtual environment created in the computer. A task-dependently defined finite automaton is used to interpret the movements as a sequence of high-level representations of operations. When the system is commanded to perform the learned task, it observes the task environment, checks the geometric feasibility of the task in that environment, and, if necessary, replans the sequence of operations so that the robot can complete the task. The system then uses task-dependent interpretation rules to translate the sequence of operations into manipulator-level commands and executes the task by replicating the operator's movements in the virtual environment.
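The interpretation step described above can be sketched as a small finite automaton that consumes low-level motion events from the demonstration and emits high-level operations. This is a minimal illustration under assumed names (the states, events, and operation labels `PICK`/`PLACE` are hypothetical, not the paper's actual definitions):

```python
# Hypothetical task-dependent finite automaton for a pick-and-place task:
# (state, event) -> (next state, high-level operation emitted or None).
PICK_PLACE_FA = {
    ("idle",     "approach"): ("near",     None),
    ("near",     "grasp"):    ("holding",  "PICK"),
    ("holding",  "move"):     ("carrying", None),
    ("carrying", "move"):     ("carrying", None),
    ("carrying", "release"):  ("idle",     "PLACE"),
}

def interpret(events, fa=PICK_PLACE_FA, start="idle"):
    """Translate demonstrated motion events into a sequence of operations."""
    state, operations = start, []
    for event in events:
        state, op = fa[(state, event)]
        if op is not None:
            operations.append(op)
    return operations

demo = ["approach", "grasp", "move", "move", "release"]
print(interpret(demo))  # -> ['PICK', 'PLACE']
```

The replanner would then operate on the emitted operation list (e.g. `['PICK', 'PLACE']`) rather than on raw trajectories, which is what allows the task to be re-executed in a geometrically different environment.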
