Leveraging a virtual environment for robot programming by demonstration

Abstract The Programming by Demonstration paradigm promises to reduce the complexity of robot programming. It aims to let robot systems learn new behaviors from demonstrations given by a human operator. In this paper, we argue that while providing demonstrations in the real environment enables teaching of general tasks, for tasks whose essential features are known a priori, demonstrating in a virtual environment may improve efficiency and reduce the trainer's fatigue. We then describe a prototype system supporting Programming by Demonstration in a virtual environment and report results obtained by exploiting simple virtual tactile fixtures in pick-and-place tasks.
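The core idea behind a virtual tactile fixture can be illustrated with a minimal sketch: when the operator's tool enters a small region around a known target (e.g., a pick-and-place goal pose), the environment applies a spring-like attractive force that guides the hand toward the snap point. The function below is a hypothetical illustration, not the paper's actual implementation; the radius and stiffness parameters are assumed values.

```python
import math

def fixture_force(tool_pos, target_pos, radius=0.05, stiffness=200.0):
    """Spring-like attraction toward a snap target inside the fixture radius.

    tool_pos, target_pos: 3-tuples of coordinates in meters.
    Returns a 3-tuple force vector; zero outside the fixture region.
    """
    delta = [t - p for p, t in zip(tool_pos, target_pos)]
    dist = math.sqrt(sum(d * d for d in delta))
    if dist >= radius or dist == 0.0:
        # Outside the fixture (or already snapped): no assistance.
        return (0.0, 0.0, 0.0)
    # Attraction grows as the tool approaches the snap point.
    scale = stiffness * (radius - dist) / dist
    return tuple(scale * d for d in delta)
```

In a demonstration session, such a force would be rendered through the haptic device at each control cycle, easing precise placement and reducing operator fatigue during repeated pick-and-place trials.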
