Grounded spatial symbols for task planning based on experience

Providing autonomous humanoid robots with the ability to react in an adaptive and intelligent manner involves both low-level control and sensing and high-level reasoning. However, integrating the two levels remains challenging due to the representational gap between the continuous state space at the sensorimotor level and the discrete symbolic entities used in high-level reasoning. In this work, we approach the problem of learning a representation of space that is applicable at both levels. This representation is grounded at the sensorimotor level through exploration and at the language level through the use of common sense knowledge. We demonstrate how spatial knowledge can be extracted from these two sources of experience. Combining the resulting knowledge in a systematic way yields a solution to the grounding problem that has the potential to substantially decrease the learning effort.
