Relational Learning by Imitation

Imitative learning can be considered an essential part of human development: people acquire knowledge through the instructions and demonstrations provided by other human experts. To make an agent capable of learning from demonstrations, we propose a relational framework for learning by imitation. Demonstrations and domain-specific knowledge are compactly represented in a logical language able to express complex relational processes. The agent interacts with a stochastic environment and incrementally receives demonstrations. It engages the human actively, deciding the next action to execute and requesting demonstrations from the expert on the basis of its currently learned policy. The framework has been implemented and validated with experiments in simulated agent domains.
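The active interaction described above can be sketched as a loop in which the agent follows its learned policy when it is confident and requests a demonstration from the expert otherwise. This is a minimal illustrative sketch, not the paper's relational implementation: `ChainEnv`, `Expert`, the confidence threshold, and the table-based policy update (standing in for relational induction) are all assumptions introduced for the example.

```python
import random

class ChainEnv:
    """Toy stochastic environment: move right along a chain to reach a goal."""
    def __init__(self, n=5, slip=0.1, seed=0):
        self.n, self.slip = n, slip
        self.rng = random.Random(seed)
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        # "right" advances the agent, but may slip (stochasticity)
        if action == "right" and self.rng.random() > self.slip:
            self.pos = min(self.pos + 1, self.n)
        return self.pos, self.pos == self.n  # (next state, done)

class Expert:
    """Stand-in demonstrator: always moves toward the goal."""
    def demonstrate(self, state):
        return "right"

def active_imitation(env, expert, episodes=20, threshold=0.8):
    demonstrations = []   # incrementally received (state, action) pairs
    policy = {}           # state -> (action, confidence)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action, conf = policy.get(state, (None, 0.0))
            if conf < threshold:
                # low confidence: actively request a demonstration
                action = expert.demonstrate(state)
                demonstrations.append((state, action))
                policy[state] = (action, 1.0)  # stand-in for relational induction
            state, done = env.step(action)
    return policy, demonstrations
```

After a few episodes the agent stops querying the expert for states it has already seen, so demonstration requests taper off as the learned policy covers more of the state space.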
