We present a new method for learning good strategies in zero-sum Markov games in which each side is composed of multiple agents collaborating against an opposing team of agents. Our method requires full observability and communication during learning, but the learned policies can be executed in a distributed manner. The value function is represented as a factored linear architecture and its structure determines the necessary computational resources and communication bandwidth. This approach permits a tradeoff between simple representations with little or no communication between agents and complex, computationally intensive representations with extensive coordination between agents. Thus, we provide a principled means of using approximation to combat the exponential blowup in the joint action space of the participants. The approach is demonstrated with an example that shows the efficiency gains over naive enumeration.
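To illustrate the kind of factored linear value function and the efficiency gain over naive enumeration that the abstract refers to, the following is a minimal, hypothetical Python sketch, not the paper's implementation: a team Q-function that decomposes into pairwise components along a chain of agents can be maximized over joint actions by eliminating one agent at a time instead of enumerating all |A|^N joint actions. The chain structure, agent and action counts, and random component tables are illustrative assumptions.

```python
import itertools
import random

random.seed(0)

N_AGENTS = 4        # agents on one team (illustrative)
ACTIONS = range(3)  # each agent's local action set (illustrative)

# Hypothetical factored Q-function for one fixed state: one local component
# per adjacent pair of agents, so Q(a) = sum_j q_j(a_j, a_{j+1}).
components = [
    {(a_i, a_j): random.uniform(-1.0, 1.0) for a_i in ACTIONS for a_j in ACTIONS}
    for _ in range(N_AGENTS - 1)
]

def q_value(joint_action):
    """Evaluate the factored Q-function at a joint action (tuple of local actions)."""
    return sum(components[j][(joint_action[j], joint_action[j + 1])]
               for j in range(N_AGENTS - 1))

# Naive maximization: enumerate all |A|^N joint actions.
naive_best = max(itertools.product(ACTIONS, repeat=N_AGENTS), key=q_value)

def eliminate_chain():
    """Maximize the factored Q-function by eliminating agents along the chain,
    a special case of variable (bucket) elimination; cost O(N * |A|^2)."""
    msg = {a: 0.0 for a in ACTIONS}  # best value of eliminated components, per current action
    back = []                        # backpointers for recovering the argmax
    for j in range(N_AGENTS - 1):
        new_msg, choice = {}, {}
        for a_next in ACTIONS:
            vals = {a: msg[a] + components[j][(a, a_next)] for a in ACTIONS}
            best_a = max(vals, key=vals.get)
            new_msg[a_next], choice[a_next] = vals[best_a], best_a
        msg = new_msg
        back.append(choice)
    a_last = max(msg, key=msg.get)   # last agent picks its action...
    joint = [a_last]
    for choice in reversed(back):    # ...the others follow the backpointers
        joint.append(choice[joint[-1]])
    return tuple(reversed(joint))

factored_best = eliminate_chain()
assert abs(q_value(naive_best) - q_value(factored_best)) < 1e-9
print("max Q:", round(q_value(factored_best), 3), "argmax:", factored_best)
```

Because the elimination order here follows the chain, each agent only exchanges a small table with its neighbor; a denser coordination structure would require more computation and communication, which is the tradeoff described above.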