Memory-efficient factored abstraction for reinforcement learning

Classical reinforcement learning techniques are often inadequate for problems with large state spaces due to the curse of dimensionality. If states can be represented as a set of variables, the environment can be modeled more compactly. The automatic detection and use of temporal abstractions during learning has been shown to be effective in increasing learning speed. In this paper, we propose a factored automatic temporal abstraction method based on an existing temporal abstraction strategy, namely the extended sequence tree algorithm, which tracks state differences in terms of state variable changes. The proposed method is shown to provide significant memory gains on selected benchmark problems.
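As a rough illustration of the factored-state idea underlying the abstract (not the paper's extended sequence tree algorithm itself), the following minimal Python sketch shows how a transition between factored states can be summarized by the subset of variables that changed, rather than by storing both full states. The grid-world variables (x, y, has_key) are hypothetical examples, not taken from the paper.

```python
# Minimal sketch, assuming states are factored into named variables.
# A transition is summarized by the variables that changed, which is
# the kind of compact difference a factored abstraction can store.

def changed_variables(state, next_state):
    """Return the variables whose values differ between two factored states."""
    return {var: next_state[var]
            for var in state
            if state[var] != next_state[var]}

if __name__ == "__main__":
    # Hypothetical grid-world states factored into x, y, and has_key.
    s = {"x": 2, "y": 5, "has_key": False}
    s_next = {"x": 3, "y": 5, "has_key": False}

    # Only the changed variable needs to be recorded for this transition,
    # instead of the full successor state.
    print(changed_variables(s, s_next))  # {'x': 3}
```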