Natural methods for robot task learning: instructive demonstrations, generalization and practice

Among humans, teaching a task is a complex process that relies on multiple means of interaction and learning, on the part of both the teacher and the learner. Used together, these modalities make teaching and learning effective. In robotics, task teaching has mostly been addressed with only one or very few of these interactions. In this paper we present an approach to teaching robots that follows the general strategy people use when teaching each other: first give a demonstration, then let the learner refine the acquired capabilities by practicing, over a small number of trials, under the teacher's supervision. Depending on the quality of the learned task, the teacher may either demonstrate it again or provide specific feedback during the learner's practice trials for further refinement. As people also do during demonstrations, the teacher can give simple instructions and informative cues that improve learning performance. Instructive demonstrations, generalization over multiple demonstrations, and practice trials are thus essential features of a successful human-robot teaching approach. We implemented a system that provides all of these capabilities and validated the approach with a Pioneer 2DX mobile robot learning tasks from multiple demonstrations and teacher feedback.
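The system described in the paper represents learned tasks as behavior networks on the Pioneer 2DX; the sketch below is only a minimal illustration of the generalize-then-refine idea, under the simplifying assumption that a task is a linear sequence of named behaviors. Generalization keeps the steps common to all demonstrations, and practice feedback adds or removes steps. All names here (generalize, refine, the behavior labels, the feedback format) are hypothetical and not the paper's actual representation or API.

```python
from typing import List, Tuple


def longest_common_subsequence(a: List[str], b: List[str]) -> List[str]:
    """Order-preserving common steps of two demonstrations (classic LCS DP)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack to recover one common subsequence.
    steps, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            steps.append(a[i - 1])
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return steps[::-1]


def generalize(demonstrations: List[List[str]]) -> List[str]:
    """Keep only the steps shared by every demonstration, treating
    demonstration-specific steps as incidental to the task."""
    task = demonstrations[0]
    for demo in demonstrations[1:]:
        task = longest_common_subsequence(task, demo)
    return task


def refine(task: List[str], feedback: List[Tuple]) -> List[str]:
    """Apply teacher feedback gathered during a practice trial.
    Feedback items are ('add', index, step) or ('remove', index)."""
    task = list(task)
    for item in feedback:
        if item[0] == "add":
            _, index, step = item
            task.insert(index, step)
        elif item[0] == "remove":
            _, index = item
            del task[index]
    return task


if __name__ == "__main__":
    # Two demonstrations of a fetch-and-deliver task; the second contains a spurious step.
    demos = [
        ["go_to_box", "pick_up", "go_to_goal", "drop"],
        ["go_to_box", "wander", "pick_up", "go_to_goal", "drop"],
    ]
    task = generalize(demos)                     # ['go_to_box', 'pick_up', 'go_to_goal', 'drop']
    task = refine(task, [("add", 2, "signal")])  # teacher: signal before heading to the goal
    print(task)
```

In the paper's actual system the task structure is richer than a linear sequence, and feedback is given online during the practice trial rather than as an edit list; the sketch only conveys how repeated demonstrations narrow the task representation and how teacher feedback further refines it.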
