Integration of Self-Organizing Feature Maps and Reinforcement Learning in Robotics

In this paper we describe a hybrid approach to solving a real-world robotic task under uncertainty. The solution is based on the integration of unsupervised learning of task features and reinforcement learning of the correspondence between situations and actions. We seek inspiration in the behavior of people performing manipulation tasks. The proposed approach clearly separates programmed skills from learned knowledge. A real-world example is presented which shows how the robot, starting from a purely random strategy, improves its performance and becomes more skilled at the task.
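
The abstract gives no implementation details, so the following is only a minimal sketch of the kind of integration it describes, assuming a self-organizing map that clusters continuous sensor readings into discrete situations and a tabular Q-learning agent that learns situation-action values. The dimensions, learning rates, reward, and toy interaction loop are illustrative assumptions, not the authors' method.

```python
# Sketch (assumed, not the paper's implementation): a small self-organizing map
# discretizes continuous sensor vectors into a finite set of "situations", and a
# tabular Q-learning agent learns which action to take in each situation.

import numpy as np

rng = np.random.default_rng(0)

N_UNITS = 16        # SOM units = discrete situations (assumed size)
SENSOR_DIM = 4      # e.g. force/torque-like readings (assumed)
N_ACTIONS = 4       # e.g. small corrective motions (assumed)

som = rng.normal(size=(N_UNITS, SENSOR_DIM))   # SOM codebook vectors
q_table = np.zeros((N_UNITS, N_ACTIONS))       # situation-action values

def som_winner(x):
    """Return the index of the best-matching SOM unit for sensor vector x."""
    return int(np.argmin(np.linalg.norm(som - x, axis=1)))

def som_update(x, winner, lr=0.1):
    """Move the winning codebook vector toward the observed sensor vector."""
    som[winner] += lr * (x - som[winner])

def choose_action(state, epsilon=0.2):
    """Epsilon-greedy selection over the learned situation-action values."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_table[state]))

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update."""
    target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (target - q_table[state, action])

# Toy interaction loop standing in for the real robot and task (assumed):
# sensor readings are random, and the reward is a placeholder.
for episode in range(200):
    x = rng.normal(size=SENSOR_DIM)
    state = som_winner(x)
    som_update(x, state)
    action = choose_action(state)
    reward = 1.0 if action == state % N_ACTIONS else 0.0  # placeholder reward
    x_next = rng.normal(size=SENSOR_DIM)
    next_state = som_winner(x_next)
    q_update(state, action, reward, next_state)
```

In this arrangement the SOM plays the role of the unsupervised feature-learning component and the Q-table plays the role of the reinforcement-learned situation-action mapping; starting with a large epsilon corresponds to the purely random initial strategy mentioned in the abstract.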
