Using Temporal Neighborhoods to Adapt Function Approximators in Reinforcement Learning

To avoid the curse of dimensionality, reinforcement learning uses function approximators to represent value functions rather than learning a separate value for each individual state. To make better use of computational resources (basis functions), many researchers are investigating ways to adapt the basis functions during the learning process so that they better fit the value-function landscape. Here we introduce temporal neighborhoods: small groups of states that experience frequent intra-group transitions during on-line sampling. We then form basis functions along these temporal neighborhoods. Empirical evidence is provided that demonstrates the effectiveness of this scheme, and we discuss the class of RL problems for which this method is likely to be appropriate.
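To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how temporal neighborhoods could be formed from on-line transition samples and turned into basis functions: states that transition between each other frequently are greedily merged into small groups, and one Gaussian basis function is placed over each group. All function names, thresholds, and the choice of Gaussian features are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of temporal-neighborhood basis functions.
# Assumptions: states are hashable, transitions are (state, next_state)
# pairs gathered during on-line sampling, and state_vectors maps each
# state to a feature vector used to place basis-function centers.

from collections import defaultdict

import numpy as np


def temporal_neighborhoods(transitions, min_count=5, max_size=4):
    """Greedily group states with frequent intra-group transitions."""
    # Count observed transitions between each unordered pair of states.
    counts = defaultdict(int)
    for s, s_next in transitions:
        if s != s_next:
            counts[frozenset((s, s_next))] += 1

    # Union-find over states; merge the most frequently linked pairs first.
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def group_size(root):
        return sum(1 for s in parent if find(s) == root)

    for pair, count in sorted(counts.items(), key=lambda kv: -kv[1]):
        if count < min_count:
            break  # remaining pairs are too rarely visited together
        a, b = tuple(pair)
        ra, rb = find(a), find(b)
        if ra != rb and group_size(ra) + group_size(rb) <= max_size:
            parent[ra] = rb  # keep neighborhoods small

    # Collect neighborhoods as lists of states sharing a root.
    groups = defaultdict(list)
    for s in parent:
        groups[find(s)].append(s)
    return list(groups.values())


def neighborhood_features(state_vectors, neighborhoods, width=1.0):
    """One Gaussian basis function per neighborhood, centered on its mean."""
    centers = np.array([np.mean([state_vectors[s] for s in group], axis=0)
                        for group in neighborhoods])

    def phi(x):
        d = np.linalg.norm(centers - np.asarray(x), axis=1)
        return np.exp(-(d / width) ** 2)

    return phi
```

The resulting feature map `phi` could then be used with any standard linear value-function update (e.g. TD learning), so that the basis functions concentrate representational capacity on regions of the state space that are actually traversed together during sampling.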