We present three ways of combining linear programming with the kernel trick to find value function approximations for reinforcement learning. One formulation is based on SVM regression; the second is based on the Bellman equation; and the third seeks only to ensure that good moves have an advantage over bad moves. All formulations attempt to minimize the number of support vectors while fitting the data. Experiments in a difficult, synthetic maze problem show that all three formulations give excellent performance, but the advantage formulation is much easier to train. Unlike policy gradient methods, the kernel methods described here can easily adjust the complexity of the function approximator to fit the complexity of the value function.
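To make the advantage formulation concrete, here is a minimal sketch of how such a linear program could be set up; it is not the paper's exact formulation. It assumes an RBF kernel, a margin of 1, and scipy's `linprog` solver; the function and variable names (`fit_advantage_lp`, `good_succ`, `bad_succ`, `C`, `gamma`) are illustrative. The value function is expanded as V(s) = Σ_j w_j K(s_j, s); the L1 penalty on w plays the role of minimizing the number of support vectors, and each training state contributes one constraint requiring the preferred successor to score at least one margin unit above the dispreferred one, softened by a slack variable.

```python
# Sketch of an advantage-style LP (illustrative, not the paper's exact program):
# fit V(s) = sum_j w_j * K(s_j, s) with an L1 penalty on w (few support vectors)
# subject to V(good successor) >= V(bad successor) + 1 - slack for each training state.
import numpy as np
from scipy.optimize import linprog

def rbf_kernel(A, B, gamma=1.0):
    """K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_advantage_lp(centers, good_succ, bad_succ, C=10.0, gamma=1.0):
    """centers: (n, d) kernel centers; good_succ / bad_succ: (m, d) successor
    states reached by the preferred / dispreferred action from m training states."""
    n, m = len(centers), len(good_succ)
    # Row i holds K(., good successor i) - K(., bad successor i).
    dK = rbf_kernel(good_succ, centers, gamma) - rbf_kernel(bad_succ, centers, gamma)
    # Variables: [w_plus (n), w_minus (n), slack (m)], all >= 0; w = w_plus - w_minus.
    c = np.concatenate([np.ones(2 * n), C * np.ones(m)])  # L1 norm of w + slack penalty
    # Advantage constraint  dK @ w + slack >= 1  ->  -dK @ w - slack <= -1.
    A_ub = np.hstack([-dK, dK, -np.eye(m)])
    b_ub = -np.ones(m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    w = res.x[:n] - res.x[n:2 * n]
    return w, lambda s: rbf_kernel(np.atleast_2d(s), centers, gamma) @ w
```

Because the objective is the L1 norm of the kernel weights plus the slack penalty, the LP solution tends to zero out most weights, so only a small set of centers survives as support vectors, which is the sparsity behavior the abstract refers to.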