We present a Monte-Carlo simulation algorithm for real-time policy improvement of an adaptive controller. In the simulation, the long-term expected reward of each candidate action is statistically estimated, using the initial policy to make decisions at every subsequent step. The action maximizing the estimated expected reward is then taken, resulting in an improved policy. Our algorithm is easily parallelizable and has been implemented on the IBM SP1 and SP2 parallel-RISC supercomputers.
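As a rough illustration of the rollout idea described above (a minimal sketch, not the authors' implementation), the following Python code assumes a hypothetical resettable simulator env exposing actions(state) and step(state, action) -> (next_state, reward, done), and a base_policy callable; all of these names, as well as the trial and horizon counts, are illustrative assumptions.

import random
from collections import defaultdict

def rollout_action(state, env, base_policy, n_trials=100, horizon=200):
    """Return the action whose Monte-Carlo estimate of long-term reward is highest.

    Hypothetical interfaces: env.actions(state) lists legal actions,
    env.step(state, action) simulates one transition, and base_policy(state)
    returns the initial (base) controller's action.
    """
    totals = defaultdict(float)
    for action in env.actions(state):
        for _ in range(n_trials):
            # Take the candidate action once, then follow the base policy
            # for the rest of the simulated trajectory.
            s, reward, done = env.step(state, action)
            ret = reward
            for _ in range(horizon):
                if done:
                    break
                s, reward, done = env.step(s, base_policy(s))
                ret += reward
            totals[action] += ret
    # The improved (rollout) policy plays the action with the best average return.
    return max(totals, key=lambda a: totals[a] / n_trials)

Because the trials for different candidate actions are independent, they can be distributed across processors, which is what makes the method easy to parallelize.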
We have obtained promising initial results in applying this algorithm to the domain of backgammon. Results are reported for a wide variety of initial policies, ranging from a random policy to TD-Gammon, an extremely strong multi-layer neural network. In each case, the Monte-Carlo algorithm gives a substantial reduction, by as much as a factor of 5 or more, in the error rate of the base players. The algorithm is also potentially useful in many other adaptive control applications in which it is possible to simulate the environment.