Multiagent Reinforcement Learning in Stochastic Games

We adopt stochastic games as a general framework for dynamic noncooperative systems. This framework describes the dynamic interactions among agents in terms of the coupling of their individual Markov decision processes. By studying this framework, we go beyond the common practice in the study of learning in games, which primarily focuses on repeated games or extensive-form games. For stochastic games with incomplete information, we design a multiagent reinforcement learning method that allows agents to learn Nash equilibrium strategies. We show both theoretically and experimentally that this algorithm converges. From the viewpoint of machine learning research, our work helps to establish the theoretical foundation for applying reinforcement learning, originally defined for single-agent systems, to multiagent systems.
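The kind of learning rule described above can be illustrated in the spirit of Nash Q-learning: each agent keeps a Q-table over joint actions and bootstraps on an equilibrium value of the "stage game" induced by the Q-tables at the successor state. The following is a minimal illustrative sketch, not the paper's actual algorithm: the two-state coordination game, the pure-strategy equilibrium search with a maximin fallback (a full method would compute mixed equilibria), and the names `step`, `stage_equilibrium`, and `nash_q` are all hypothetical simplifications.

```python
import random
from collections import defaultdict
from itertools import product

ACTIONS = [0, 1]

def step(s, a1, a2):
    # Hypothetical two-state coordination game: matching actions earn
    # both agents reward 1 and flip the state; otherwise nothing changes.
    r = 1.0 if a1 == a2 else 0.0
    s_next = 1 - s if a1 == a2 else s
    return r, r, s_next

def stage_equilibrium(Q1, Q2, s):
    # Search for a pure-strategy Nash equilibrium of the stage game
    # defined by Q1[s, ., .] and Q2[s, ., .]; if none exists, fall back
    # to each agent's maximin action and value (a simplification).
    for a1, a2 in product(ACTIONS, ACTIONS):
        if all(Q1[(s, b, a2)] <= Q1[(s, a1, a2)] for b in ACTIONS) and \
           all(Q2[(s, a1, b)] <= Q2[(s, a1, a2)] for b in ACTIONS):
            return a1, a2, Q1[(s, a1, a2)], Q2[(s, a1, a2)]
    a1 = max(ACTIONS, key=lambda a: min(Q1[(s, a, b)] for b in ACTIONS))
    a2 = max(ACTIONS, key=lambda a: min(Q2[(s, b, a)] for b in ACTIONS))
    return (a1, a2,
            min(Q1[(s, a1, b)] for b in ACTIONS),
            min(Q2[(s, b, a2)] for b in ACTIONS))

def nash_q(episodes=3000, alpha=0.1, gamma=0.9, eps=0.2):
    # Nash-Q-style update: each agent bootstraps its Q-value on the
    # equilibrium value of the stage game at the successor state,
    # rather than on its own max as in single-agent Q-learning.
    Q1, Q2 = defaultdict(float), defaultdict(float)
    s = 0
    for _ in range(episodes):
        if random.random() < eps:      # joint exploration
            a1, a2 = random.choice(ACTIONS), random.choice(ACTIONS)
        else:                          # play a stage-game equilibrium
            a1, a2, _, _ = stage_equilibrium(Q1, Q2, s)
        r1, r2, s2 = step(s, a1, a2)
        _, _, v1, v2 = stage_equilibrium(Q1, Q2, s2)
        Q1[(s, a1, a2)] += alpha * (r1 + gamma * v1 - Q1[(s, a1, a2)])
        Q2[(s, a1, a2)] += alpha * (r2 + gamma * v2 - Q2[(s, a1, a2)])
        s = s2
    return Q1, Q2
```

Note the design point this sketch makes concrete: the only change from single-agent Q-learning is the bootstrap target, which replaces the agent's own max with an equilibrium value computed jointly from both agents' Q-tables.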