High-level learning from demonstration with conceptual spaces and subspace clustering

Learning from demonstration (LfD) aims to enable robots to learn skills from human-demonstrated tasks. Robots should be able to learn at all levels of abstraction. Unlike learning at the level of motor primitives, high-level LfD requires symbolic representations and thus faces the classical problem of symbol grounding. Furthermore, it requires the robot to interpret human-demonstrated actions at a higher, conceptual level of abstraction. We present a method that enables a robot to recognize the goals of human-demonstrated pick-and-place tasks on an object-relational abstraction layer. The robot can then reproduce these task goals in new situations using a symbolic planner. We show that, in a robotic context, conceptual spaces can serve as a means for symbol grounding at the object-relational level as well as for recognizing conceptual similarities in the effects of human-demonstrated actions. The method is evaluated in experiments on a real robot.
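The abstract leaves the mechanics implicit, so the following is a minimal, hypothetical sketch of the underlying idea, assuming demonstrated action effects are encoded as points in a conceptual space whose axes are quality dimensions, and that subspace clustering (approximated here by a brute-force per-subspace DBSCAN, in the spirit of SUBCLU) identifies the dimensions on which the demonstrations agree. The function name relevant_subspaces, the parameter values, and the toy data are illustrative assumptions, not taken from the paper.

    # Minimal sketch (not the paper's implementation): find the conceptual
    # subspaces in which all demonstrated action effects form one dense
    # cluster. Each row of `effects` is one demonstration's effect, encoded
    # as a point in a conceptual space (axes = quality dimensions).
    from itertools import combinations

    import numpy as np
    from sklearn.cluster import DBSCAN

    def relevant_subspaces(effects, eps=0.15, min_samples=3):
        """Return the axis subsets on which all demonstrations agree,
        i.e. fall into a single DBSCAN cluster with no noise points.
        The exhaustive search over axis subsets is a crude stand-in
        for dedicated subspace clustering schemes such as SUBCLU."""
        n_dims = effects.shape[1]
        hits = []
        for k in range(1, n_dims + 1):
            for dims in combinations(range(n_dims), k):
                labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
                    effects[:, list(dims)])
                # One cluster and no noise label (-1): the demonstrated
                # effects are conceptually similar on these dimensions.
                if len(set(labels)) == 1 and labels[0] != -1:
                    hits.append(dims)
        return hits

    # Toy data: five pick-and-place effects that agree on dimension 0
    # (say, the hue of the target region) but vary freely on dimension 1.
    demos = np.array([[0.82, 0.1],
                      [0.80, 0.9],
                      [0.83, 0.4],
                      [0.81, 0.7],
                      [0.79, 0.2]])
    print(relevant_subspaces(demos))  # -> [(0,)]

A recovered subspace of this kind would then define the goal concept to be expressed symbolically and handed to the planner; in the paper, that role belongs to the object-relational abstraction layer and the symbolic planner mentioned above.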
