A multiagent reinforcement learning algorithm by dynamically merging Markov decision processes

One general strategy for accelerating learning in cooperative multiagent problems is to reuse solutions that are good or optimal for each agent acting alone. In this paper, we formalize this approach as dynamically merging solutions to multiple Markov decision processes (MDPs), each representing an individual agent's solution when acting alone, to obtain solutions to the overall multiagent MDP when all the agents act together. We present a new learning algorithm called MAPLE (MultiAgent Policy LEarning) that uses Q-learning and dynamic merging to efficiently construct global solutions to the overall multiagent problem from solutions to the individual MDPs. We illustrate the efficiency of MAPLE by comparing its performance with standard Q-learning applied directly to the overall multiagent MDP.
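
The abstract does not spell out how the single-agent solutions seed the joint learner, so the following Python sketch is only one plausible reading, not the MAPLE algorithm itself: individual Q-tables are combined into optimistic bounds that initialize tabular Q-learning over the joint MDP. The `merged_bounds` and `q_learning_with_merging` functions and the `env` interface (`reset`, `step`, `joint_actions`) are hypothetical names introduced for illustration.

```python
import numpy as np

# Hypothetical tabular setting: each agent i has a solved single-agent MDP,
# with q_single[i][s_i, a_i] holding its optimal Q-values when acting alone.
# The joint state is a tuple (s_0, ..., s_n) and the joint action likewise.

def merged_bounds(q_single, joint_state, joint_action):
    """Bounds on the joint Q-value built from the individual solutions:
    under an additive-reward assumption, the sum of individual Q-values is
    an optimistic (upper) bound and the maximum a pessimistic (lower) bound."""
    vals = [q[s, a] for q, s, a in zip(q_single, joint_state, joint_action)]
    return max(vals), sum(vals)

def q_learning_with_merging(env, q_single, n_actions, episodes=500,
                            alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning over the joint MDP, initialized from the merged
    upper bound so learning starts from the single-agent solutions
    rather than from scratch. `env` is a hypothetical joint environment."""
    q_joint = {}

    def q(s, a):
        if (s, a) not in q_joint:
            # Optimistic initialization from the merged single-agent values.
            _, upper = merged_bounds(q_single, s, a)
            q_joint[(s, a)] = upper
        return q_joint[(s, a)]

    rng = np.random.default_rng(0)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Epsilon-greedy action selection over the joint action space.
            if rng.random() < epsilon:
                a = tuple(int(rng.integers(n)) for n in n_actions)
            else:
                a = max(env.joint_actions(), key=lambda act: q(s, act))
            s_next, r, done = env.step(a)
            target = r if done else r + gamma * max(
                q(s_next, act) for act in env.joint_actions())
            q_joint[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            s = s_next
    return q_joint
```

The design choice illustrated here is the key point of the abstract: the joint learner does not start from arbitrary values but from a merge of the individual agents' solutions, which is what yields the speedup over standard Q-learning applied directly to the multiagent MDP.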