Improving Space Representation in Multiagent Learning via Tile Coding

Reinforcement learning is an efficient, widely used machine learning technique that performs well in problems characterized by a small number of states and actions. Multiagent learning problems rarely fit this description, and standard approaches may not be adequate for them. As an alternative, techniques that generalize the state space allow agents to learn through abstractions. The focus of this work is therefore to combine multiagent learning with a generalization technique, namely tile coding, which is key in scenarios where agents must explore a large number of states. In the scenarios used to test and validate this approach, our results indicate that the proposed representation outperforms the tabular one and is thus an effective alternative.
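To make the generalization idea concrete, the sketch below shows a minimal tile coder for a 2-D continuous state: several uniform grids (tilings), each shifted by a fraction of a tile width, map a state to one active tile per tiling. Nearby states share most of their active tiles, which is what lets a linear value-function learner generalize across similar states. All parameter names and values here are illustrative assumptions, not details from the paper.

```python
def tile_indices(x, y, num_tilings=8, tiles_per_dim=10, low=0.0, high=1.0):
    """Map a 2-D continuous state (x, y) to one active tile index per tiling.

    Each tiling is a uniform grid shifted by 1/num_tilings of a tile width,
    so nearby states share many (but not all) active tiles -- the overlap
    is what drives generalization. Illustrative sketch only.
    """
    scale = tiles_per_dim / (high - low)
    indices = []
    for t in range(num_tilings):
        offset = t / num_tilings  # fractional shift for this tiling
        ix = int((x - low) * scale + offset)
        iy = int((y - low) * scale + offset)
        # Flatten (tiling, ix, iy) into a single feature index;
        # tiles_per_dim + 1 leaves room for the offset overflow.
        indices.append(t * (tiles_per_dim + 1) ** 2
                       + ix * (tiles_per_dim + 1) + iy)
    return indices

# Nearby states activate mostly overlapping tile sets; distant states do not.
a = tile_indices(0.31, 0.42)
b = tile_indices(0.32, 0.43)
c = tile_indices(0.90, 0.10)
print(len(set(a) & set(b)), len(set(a) & set(c)))
```

A learner then stores one weight per tile index and estimates a state's value as the sum of the weights of its active tiles, so an update for one state also adjusts the estimates of its neighbors.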