Lexicon acquisition based on object-oriented behavior learning

Lexicon acquisition systems are attracting attention both as a path toward a natural human–robot interface and as a test environment for modeling the infant lexicon acquisition process. Although various lexicon acquisition systems that ground words in sensory experience have been developed, existing systems are clearly limited in their ability to autonomously associate words with objects. This limitation arises because word categories are formed passively, either through teaching by caregivers or through similarities in visual features. This paper presents a system for lexicon acquisition through behavior learning. Using a modified multi-module reinforcement learning system, the robot can automatically associate words with objects of varied visual appearance, based on similarities in affordances or in functions. The system was implemented on a mobile robot acquiring a lexicon related to different rolling preferences. Experimental results are given and future issues are discussed.
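The core idea of word association via multi-module reinforcement learning can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the module names, states, actions, and the rule of labeling a novel object with the word of the module whose Q-values best predict its interaction outcomes (lowest total temporal-difference error) are all assumptions introduced here for clarity.

```python
class QModule:
    """One Q-learning module paired with a word label (e.g. "rolls").

    Hypothetical sketch: states and actions below are illustrative
    placeholders, not taken from the paper's robot experiments.
    """

    def __init__(self, word, actions, alpha=0.5, gamma=0.9):
        self.word = word
        self.actions = actions
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.q = {}             # (state, action) -> estimated value

    def _best(self, state):
        return max(self.q.get((state, a), 0.0) for a in self.actions)

    def td_error(self, state, action, reward, next_state):
        """Temporal-difference error of one transition under this module."""
        return (reward + self.gamma * self._best(next_state)
                - self.q.get((state, action), 0.0))

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update toward the observed transition."""
        key = (state, action)
        td = self.td_error(state, action, reward, next_state)
        self.q[key] = self.q.get(key, 0.0) + self.alpha * td


def assign_word(modules, experience):
    """Name a novel object with the word of the module that best
    predicts its interaction outcomes (lowest total |TD error|)."""
    def total_error(m):
        return sum(abs(m.td_error(*step)) for step in experience)
    return min(modules, key=total_error).word


# Toy training: the "rolls" module learns from ball-like transitions,
# the "stays" module from box-like ones.
ACTIONS = ("push",)
rolls = QModule("rolls", ACTIONS)
stays = QModule("stays", ACTIONS)
ball_step = ("near", "push", 1.0, "rolled-away")   # pushing makes it roll
box_step = ("near", "push", 0.0, "near")           # pushing does nothing
for _ in range(50):
    rolls.update(*ball_step)
    stays.update(*box_step)

# A novel cylinder that behaves like the ball gets the word "rolls",
# even if its visual features differ from anything seen before.
cylinder_word = assign_word([rolls, stays], [ball_step])
```

The point of the sketch is that the grouping criterion is behavioral, not visual: two objects receive the same word whenever the same module's learned value function explains their interaction dynamics, which is how a function- or affordance-based lexicon can emerge without a caregiver pre-defining the categories.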
