Where Do Actions Come From? Autonomous Robot Learning of Objects and Actions

Decades of AI research have yielded techniques for learning, inference, and planning that depend on human-provided ontologies of self, space, time, objects, actions, and properties. Since robots are constructed with low-level sensor and motor interfaces that do not provide these concepts, the human robotics researcher must create the bindings between the required high-level concepts and the available low-level interfaces. This raises the developmental learning problem for robots: how can a learning agent create high-level concepts from its own low-level experience? Prior work has shown how objects can be individuated from low-level sensation, and how certain properties can be learned for individual objects. This work shows how high-level actions can be learned autonomously by searching for control laws that reliably change those properties in predictable ways. We present a robust and efficient algorithm that creates reliable control laws for perceived objects. We demonstrate on a physical robot that these high-level actions can be learned from the robot's own experiences, and can then be applied to a learned object to achieve a desired goal.
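The abstract describes the approach only in prose. As a rough illustration of the underlying idea (not the authors' actual algorithm), the sketch below samples candidate motor commands, observes the resulting change in a single tracked object property in a toy simulated environment, and keeps only those commands whose effect is both large enough to be predictable and consistent enough across trials to be reliable. All names here (simulated_effect, learn_actions, the thresholds, and the simulated dynamics) are hypothetical placeholders, not from the paper.

```python
# Minimal sketch of learning "high-level actions" as control laws that change a
# perceived object property reliably and predictably. The environment, property,
# and thresholds are hypothetical stand-ins for a real robot's perception loop.

import random
import statistics


def simulated_effect(command, noise=0.05):
    """Stand-in for executing a motor command and observing the change in a
    tracked object property (e.g., the object's distance from the robot).
    On a real robot this value would come from perception of the object."""
    # Pretend that forward pushes (command > 0) move the object away roughly
    # proportionally, while other commands have erratic, unreliable effects.
    if command > 0:
        return command + random.gauss(0.0, noise)
    return random.gauss(0.0, 0.5)


def evaluate_control_law(command, trials=20):
    """Run a candidate control law several times and summarize its effect."""
    effects = [simulated_effect(command) for _ in range(trials)]
    return statistics.mean(effects), statistics.stdev(effects)


def learn_actions(candidate_commands, effect_threshold=0.1, reliability_threshold=0.2):
    """Keep candidate control laws whose mean effect on the property is large
    (a predictable direction of change) and whose spread across trials is small
    (a reliable outcome)."""
    actions = []
    for command in candidate_commands:
        mean_effect, spread = evaluate_control_law(command)
        if abs(mean_effect) > effect_threshold and spread < reliability_threshold:
            actions.append((command, mean_effect))
    return actions


if __name__ == "__main__":
    candidates = [-0.5, -0.2, 0.0, 0.2, 0.5, 1.0]
    for command, effect in learn_actions(candidates):
        print(f"command {command:+.1f} -> mean property change {effect:+.2f}")
```

Under these toy dynamics only the forward-push commands survive the filter, which mirrors the paper's intuition that an action is worth keeping when it produces a dependable, goal-relevant change in a perceived object's property.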
