Purposive behavior acquisition for a real robot by vision-based reinforcement learning

This paper presents a method of vision-based reinforcement learning by which a robot learns to shoot a ball into a goal. We discuss several issues in applying reinforcement learning to a real robot equipped with a vision sensor, through which the robot obtains information about changes in its environment. First, we construct a state space in terms of the size, position, and orientation of the ball and the goal in the image, and an action space in terms of the action commands sent to the left and right motors of the mobile robot. Constructing state and action spaces that directly reflect the outputs of physical sensors and actuators in this way causes a "state-action deviation" problem: a single motor command often produces no change in the coarsely discretized visual state. To deal with this issue, the action set is constructed so that each action consists of a series of the same action primitive, executed repeatedly until the current state changes. Next, to shorten the learning time, a mechanism of Learning from Easy Missions (LEM) is implemented. LEM reduces the learning time from exponential to almost linear order in the size of the state space. The results of computer simulations and real robot experiments are given.
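The two mechanisms the abstract describes can be illustrated in a minimal sketch. This is a hypothetical one-dimensional toy, not the paper's actual state and action spaces: states are coarse bins of the robot's distance to the goal, a motor primitive shifts the fine position by one unit, one learning action repeats a primitive until the coarse state changes (the remedy for the state-action deviation problem), and LEM is approximated by scheduling episode start positions from near the goal outward. All names and constants here are illustrative assumptions.

```python
import random

# Hypothetical toy domain: fine positions 0..59, coarse state bins 0..5,
# bin 0 = goal scored.  None of these constants come from the paper.
N_STATES = 6                       # coarse state bins; bin 0 is the goal
PRIMS = (-1, +1)                   # assumed primitives: advance / retreat
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # Q-learning step size, discount, exploration

def coarse(x):
    """Map a fine-grained distance x (0..59) to a coarse state bin."""
    return min(x // 10, N_STATES - 1)

def act(x, prim):
    """State-action deviation remedy: repeat one primitive until the
    coarse state changes (or a position bound is hit)."""
    s0 = coarse(x)
    while coarse(x) == s0 and 0 < x < 59:
        x += prim
    return x

def episode(Q, x0, max_steps=50):
    """One epsilon-greedy Q-learning episode from fine start position x0."""
    x = x0
    for _ in range(max_steps):
        s = coarse(x)
        if s == 0:                                # goal reached
            break
        if random.random() < EPS:                 # explore
            a = random.randrange(len(PRIMS))
        else:                                     # exploit
            a = max(range(len(PRIMS)), key=lambda i: Q[s][i])
        x = act(x, PRIMS[a])
        s2 = coarse(x)
        r = 1.0 if s2 == 0 else 0.0               # reward only on scoring
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])

def train_lem(episodes_per_bin=30, seed=0):
    """LEM schedule: train from easy (near-goal) start bins first,
    then progressively harder (farther) ones."""
    random.seed(seed)
    Q = [[0.0] * len(PRIMS) for _ in range(N_STATES)]
    for start_bin in range(1, N_STATES):          # easy missions first
        for _ in range(episodes_per_bin):
            episode(Q, x0=start_bin * 10 + 5)
    return Q
```

After training, the greedy policy in every non-goal bin should prefer the "advance" primitive; the LEM ordering matters because values learned in the easy near-goal states bootstrap the updates made from the harder, more distant start states.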
