Robot Programming by Demonstration with Interactive Action Visualizations

Existing approaches to Robot Programming by Demonstration (PbD) require multiple demonstrations to capture the task information that lets robots generalize to unseen situations. However, providing these demonstrations is cumbersome for end-users. In addition, users who are unfamiliar with the system often fail to provide sufficiently varied demonstrations. We propose an alternative PbD framework in which the user demonstrates the task once and then supplies additional task information explicitly, through interactions with a visualization of the action. We present a simple action representation that supports this framework and describe a system that implements it on a two-armed mobile manipulator. We demonstrate the power of this system by evaluating it on a diverse benchmark of tasks involving the manipulation of everyday objects. We then show, through a user study in which participants program a subset of the benchmark, that the system is easy for novice users to learn and use. We characterize the limitations of our system in task generalization and end-user interaction, and present extensions that could address some of them.
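
The paper itself does not include code, but the following Python sketch illustrates one plausible form of the "simple action representation" described above: an action as a sparse sequence of end-effector targets, each expressed either in the robot's base frame or relative to a scene landmark so it can be re-grounded in a new scene. All class and field names here are hypothetical, and the landmark-relative encoding is an assumption in the spirit of keyframe-style PbD representations, not the authors' exact data structure.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Pose as position (x, y, z) plus orientation quaternion (qx, qy, qz, qw).
Pose = Tuple[float, float, float, float, float, float, float]


@dataclass
class Landmark:
    """A scene object detected at demonstration or execution time."""
    name: str   # e.g. "cup"; a real system would match by appearance, not name
    pose: Pose  # landmark pose in the robot base frame


@dataclass
class ActionStep:
    """One target end-effector state in the demonstrated action."""
    ee_pose: Pose                         # target pose in the reference frame
    gripper_open: bool                    # gripper state to command at this step
    reference: Optional[Landmark] = None  # None => pose is in the robot base frame


@dataclass
class Action:
    """A demonstrated action: an ordered, sparse sequence of steps."""
    name: str
    steps: List[ActionStep] = field(default_factory=list)

    def reground(self, detected: List[Landmark]) -> None:
        """Re-bind landmark-relative steps to landmarks seen in a new scene.

        This sketch matches landmarks by name; an interactive system could
        instead let the user inspect and correct these assignments through
        a visualization of the action before execution.
        """
        by_name = {lm.name: lm for lm in detected}
        for step in self.steps:
            if step.reference is not None and step.reference.name in by_name:
                step.reference = by_name[step.reference.name]
```

Because each step carries its own reference frame, visualizing and editing the action reduces to displaying and manipulating a handful of poses, which is what makes the single-demonstration, interactive-refinement workflow described in the abstract feasible.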
