Interactive robot task training through dialog and demonstration

Effective human-robot interfaces that mimic how humans interact with one another could ultimately lead to robots being accepted in a wider range of applications. We present a framework for interactive task training in which a mobile robot learns how to perform various tasks by observing a human. In addition to observation, the robot listens to the human's speech and interprets it as the behaviors to be executed. This is especially important when individual steps of a task have contingencies that must be handled differently depending on the situation. Finally, the context of the location where the task takes place and the people present factor heavily into the robot's interpretation of how to execute the task. In this paper, we describe the task training framework, explain how environmental context and communicative dialog with the human help the robot learn the task, and illustrate the utility of the approach with several experimental case studies.
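As a minimal illustrative sketch (not the paper's actual implementation), a task taught through dialog can be thought of as an ordered list of steps, where each step carries an optional contingency predicate evaluated against the current context at execution time. The `Task`/`TaskStep` names and the utterance-to-behavior mapping below are hypothetical simplifications introduced only for illustration:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class TaskStep:
    behavior: str
    # Optional contingency: the step runs only if this predicate
    # holds in the execution-time context.
    condition: Optional[Callable[[dict], bool]] = None

@dataclass
class Task:
    name: str
    steps: list = field(default_factory=list)

    def teach(self, utterance: str, condition=None):
        # Hypothetical speech interpretation: treat the (normalized)
        # utterance as the name of a behavior to append to the task.
        self.steps.append(TaskStep(utterance.strip().lower(), condition))

    def execute(self, context: dict) -> list:
        # Run each step whose contingency (if any) holds in the context.
        return [s.behavior for s in self.steps
                if s.condition is None or s.condition(context)]

task = Task("deliver message")
task.teach("Go to the lab")
task.teach("Knock on the door",
           condition=lambda ctx: ctx.get("door_closed", False))
task.teach("Deliver the message")

print(task.execute({"door_closed": True}))
# → ['go to the lab', 'knock on the door', 'deliver the message']
```

With `door_closed` absent or false, the knocking step is skipped, which is the kind of situation-dependent branching the framework's contingencies are meant to capture.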
