Autonomous Development of a Grounded Object Ontology by a Learning Robot

We describe how a physical robot can learn about objects from its own autonomous experience in the continuous world. The robot identifies statistical regularities that allow it to represent a physical object with a cluster of sensations that violate a static world model, track that cluster over time, extract percepts from that cluster, form concepts from similar percepts, and learn reliable actions that can be applied to objects. We present a formalism for representing the ontology for objects and actions, a learning algorithm, and the results of an evaluation with a physical robot.
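To make the summarized pipeline concrete, here is a minimal Python sketch of its first three steps: flagging sensations that violate a static world model, clustering the violations into object hypotheses, and tracking those clusters over time. Everything in it (StaticWorldModel, cluster, Tracker, the 2-D point readings, the greedy clustering, the nearest-centroid association) is an illustrative assumption, not the paper's implementation; percept extraction, concept formation, and action learning are omitted.

```python
import math
from dataclasses import dataclass, field


@dataclass
class StaticWorldModel:
    """Toy stand-in for a learned static map: a list of known 2-D points."""
    static_points: list
    tolerance: float = 0.3  # readings closer than this are "explained"

    def explains(self, reading):
        return any(math.dist(reading, p) <= self.tolerance
                   for p in self.static_points)


def dynamic_readings(model, scan):
    """Step 1: keep only the sensations the static model fails to explain."""
    return [r for r in scan if not model.explains(r)]


def cluster(readings, gap=0.5):
    """Step 2: greedy spatial clustering of the unexplained readings."""
    clusters = []
    for r in readings:
        for c in clusters:
            if any(math.dist(r, q) <= gap for q in c):
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters


def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))


@dataclass
class Tracker:
    """Step 3: nearest-centroid data association across successive scans."""
    max_jump: float = 1.0                       # max centroid motion per scan
    tracks: dict = field(default_factory=dict)  # track id -> last centroid
    next_id: int = 0

    def update(self, clusters):
        assigned = {}
        for c in clusters:
            cen = centroid(c)
            best = min(self.tracks.items(),
                       key=lambda kv: math.dist(cen, kv[1]),
                       default=None)
            if best is not None and math.dist(cen, best[1]) <= self.max_jump:
                tid = best[0]  # continue an existing track
            else:
                tid, self.next_id = self.next_id, self.next_id + 1
            self.tracks[tid] = cen
            assigned[tid] = c
        # NOTE: a real tracker would resolve two clusters claiming one track.
        return assigned        # track id -> object hypothesis


if __name__ == "__main__":
    world = StaticWorldModel(static_points=[(0.0, 0.0), (5.0, 0.0)])
    tracker = Tracker()
    scan = [(0.05, 0.1), (2.0, 2.0), (2.2, 2.1), (5.1, -0.1)]
    print(tracker.update(cluster(dynamic_readings(world, scan))))
    # -> {0: [(2.0, 2.0), (2.2, 2.1)]}: one trackable cluster of
    #    sensations that the static world model could not explain.
```

A real system would replace the toy map with a learned environment model and the greedy matcher with probabilistic data association; the sketch only illustrates how "object" can emerge as a trackable cluster of sensations the static world model fails to explain.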
