Learning the Affordances of Tools Using a Behavior-Grounded Approach

This paper introduces a behavior-grounded approach to representing and learning the affordances of tools by a robot. The affordance representation is learned during a behavioral babbling stage in which the robot randomly chooses different exploratory behaviors, applies them to the tool, and observes their effects on environmental objects. As a result of this exploratory procedure, the tool representation is grounded in the behavioral and perceptual repertoire of the robot. Furthermore, the representation is autonomously testable and verifiable by the robot, as it is expressed in concrete terms (i.e., behaviors) that are directly available to the robot's controller. The tool representation can also be used to solve tool-using tasks by dynamically sequencing the exploratory behaviors based on their expected outcomes. The quality of the learned representation was tested on extension-of-reach tasks with rigid tools.
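The exploration-then-exploitation loop described above can be sketched in code. This is a minimal, hypothetical illustration of the idea only: all names (`AffordanceTable`, `babble`, `best_behavior`) and the scalar-effect simplification are assumptions for exposition, not the paper's actual representation or algorithm.

```python
import random

class AffordanceTable:
    """Stores (behavior, observed effect) pairs collected during babbling.

    For illustration, an 'effect' is reduced to a single number (e.g., the
    observed displacement of an object); the paper's representation is richer.
    """
    def __init__(self):
        self.records = []

    def record(self, behavior, effect):
        self.records.append((behavior, effect))

    def expected_effect(self, behavior):
        # Expected outcome of a behavior = mean of its observed effects.
        effects = [e for b, e in self.records if b == behavior]
        return sum(effects) / len(effects) if effects else 0.0

def babble(behaviors, apply_behavior, table, trials=50):
    """Exploratory stage: randomly choose behaviors, apply them to the tool,
    and record the observed effects on environmental objects."""
    for _ in range(trials):
        b = random.choice(behaviors)
        table.record(b, apply_behavior(b))

def best_behavior(table, behaviors, goal_effect):
    """Task stage: select the behavior whose expected outcome is closest to
    the desired effect; repeating this selection sequences behaviors."""
    return min(behaviors, key=lambda b: abs(table.expected_effect(b) - goal_effect))
```

In an extension-of-reach task, for example, `babble` would be run with the robot's real exploratory behaviors and perceptual measurements, after which `best_behavior` is queried repeatedly to pull an out-of-reach object toward the robot.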
