Scaling Up Reinforcement Learning with a Relational Representation

Reinforcement learning has been repeatedly suggested as a good candidate for learning in robotics. However, the large search spaces that normally occur in robotics and the expensive training experiences required by reinforcement learning algorithms have hampered its applicability. This paper introduces a new approach to reinforcement learning based on a relational representation which: (i) can be applied over large search spaces, (ii) can incorporate domain knowledge, and (iii) can use previously learned policies on different, although similar, problems. In the proposed framework, states are represented as sets of first-order relations, actions are defined in terms of those relations, and policies are learned over this generalized representation. It is shown how this representation can capture large search spaces with a relatively small set of actions and states, and that policies learned over this generalized representation can be directly applied to other problems that can be characterized by the same set of relations.
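The core idea can be illustrated with a minimal sketch of Q-learning over a relational state abstraction. Everything here is assumed for illustration, not taken from the paper: a toy blocks-world-style task in which ground states (which block rests on what) are mapped to frozensets of first-order relations such as `("on", "a", "b")`, and the Q-table is keyed by these relational abstractions rather than by ground states, so the learned values transfer to any ground problem that yields the same relations.

```python
import random
from collections import defaultdict

random.seed(0)

def abstract(ground_state):
    """Map a ground state (block -> support) to a set of first-order relations."""
    rels = set()
    for block, support in ground_state.items():
        rels.add(("on", block, support))
    return frozenset(rels)

def step(ground_state, action):
    """Illustrative transition: reward 1 whenever block 'a' ends up on 'b'."""
    _, block, dest = action
    new_state = dict(ground_state)
    new_state[block] = dest
    reward = 1.0 if new_state.get("a") == "b" else 0.0
    return new_state, reward

# Q-values are indexed by (relational_state, action), not by ground states.
Q = defaultdict(float)
alpha, gamma, eps = 0.5, 0.9, 0.1
actions = [("move", "a", "b"), ("move", "a", "table"), ("move", "b", "a")]

for episode in range(200):
    ground = {"a": "table", "b": "table"}
    for t in range(5):
        s = abstract(ground)
        # Epsilon-greedy action selection over the relational state.
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        ground, r = step(ground, a)
        s2 = abstract(ground)
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

start = abstract({"a": "table", "b": "table"})
greedy = max(actions, key=lambda act: Q[(start, act)])
print(greedy)  # the learned greedy action from the start state
```

Because `Q` is keyed on the relational abstraction, the same table applies unchanged to any ground configuration that produces the same set of relations, which is the transfer property the abstract claims.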