Automatic robot programming from learned abstract task knowledge

Robots capable of learning new tasks from humans need to transform the gathered abstract task knowledge into their own representation and dimensionality. Task knowledge acquired, e.g., through Programming by Demonstration approaches by observing a human does not a priori contain any robot-specific knowledge or actions; it is defined in the workspace and action space of the human demonstrator. This paper presents an approach for mapping abstract, human-centered task knowledge to a robot execution system based on the properties of the target system. To this end, the required background knowledge about the target system is examined and defined explicitly. The mapping process is described on the basis of this knowledge, and experiments and an evaluation are presented.
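The core idea of such a mapping can be illustrated with a minimal sketch: abstract, human-level task operators are translated into robot-specific primitives according to the capabilities declared for the target system. All names here (`AbstractAction`, `RobotProfile`, `map_task`, the primitive strings) are illustrative assumptions, not the representation used in the paper.

```python
# Hypothetical sketch of mapping human-centered task operators onto
# robot-specific primitives, keyed by the target system's capabilities.
from dataclasses import dataclass

@dataclass
class AbstractAction:
    name: str    # human-level operator, e.g. "grasp"
    target: str  # object the operator acts on

@dataclass
class RobotProfile:
    # capability name -> ordered list of robot primitives implementing it
    primitives: dict

def map_task(task, profile):
    """Translate abstract actions into executable primitives for one robot,
    collecting (rather than executing) actions the robot cannot realize."""
    program, unmapped = [], []
    for action in task:
        steps = profile.primitives.get(action.name)
        if steps is None:
            unmapped.append(action)       # no capability on this target system
        else:
            program.extend(s.format(obj=action.target) for s in steps)
    return program, unmapped

# A demonstrated task and a target robot that only supports grasping:
demo = [AbstractAction("grasp", "cup"), AbstractAction("pour", "cup")]
robot = RobotProfile(primitives={
    "grasp": ["approach({obj})", "close_gripper({obj})"],
})
program, unmapped = map_task(demo, robot)
# program  -> ["approach(cup)", "close_gripper(cup)"]
# unmapped -> [AbstractAction("pour", "cup")]
```

In a real system the lookup table would be replaced by the explicitly defined background knowledge about the target robot that the paper describes; the sketch only shows the shape of the translation step.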