In this paper we divide multi-agent policies into two categories: centralized and decentralized. The two categories reflect different views of multi-agent systems and different decision-theoretic underpinnings. While centralized policies specify the agents' decisions according to the global system state, decentralized policies, which correspond to the decisions of situated agents, can assume only partial knowledge of the system in each agent and must handle communication explicitly. We relate these two types of policies by introducing a formal and systematic methodology for transforming centralized policies into a variety of decentralized policies. We introduce a set of transformation strategies and provide a representation for reasoning about decentralized communication decisions. Our experiments show that the methodology derives a class of interesting policies spanning a range of expected utilities and communication requirements, and that it yields important insights into decentralized coordination strategies from a decision-theoretic perspective.
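The transformation idea can be made concrete with a small sketch. The Python below is our illustration, not the paper's formal method: `centralized_policy`, `DecentralizedAgent`, and the toy two-agent domain are all assumed for exposition. It shows how a decentralized agent can execute a shared centralized policy against a local guess of the global state, and how the choice of communication strategy (here only the two extremes, always or never broadcasting) trades coordination quality against the amount of communication, which is the trade-off the abstract describes.

```python
from itertools import product

# Toy domain (illustrative assumption, not the paper's experiments):
# two agents, each with a local state in {0, 1}; the global state is
# the pair of local states.
LOCAL_STATES = (0, 1)

def centralized_policy(global_state):
    """Centralized policy: global state -> joint action.

    Toy rule: both agents should coordinate ("match") iff their local
    states agree; otherwise each acts alone ("solo")."""
    s0, s1 = global_state
    return ("match", "match") if s0 == s1 else ("solo", "solo")

class DecentralizedAgent:
    """Acts on its own state plus a (possibly stale) estimate of the other's."""

    def __init__(self, index, always_communicate):
        self.index = index                    # 0 or 1
        self.always_communicate = always_communicate
        self.estimate_of_other = 0            # default prior assumption

    def maybe_send(self, own_state):
        # Communication strategy: broadcast the local state, or stay silent.
        return own_state if self.always_communicate else None

    def receive(self, message):
        if message is not None:
            self.estimate_of_other = message

    def act(self, own_state):
        # Reconstruct a guessed global state and consult the shared
        # centralized policy from the agent's own seat.
        guess = ((own_state, self.estimate_of_other) if self.index == 0
                 else (self.estimate_of_other, own_state))
        return centralized_policy(guess)[self.index]

def run_step(global_state, always_communicate):
    agents = [DecentralizedAgent(i, always_communicate) for i in (0, 1)]
    # One round of optional communication before acting.
    msgs = [agents[i].maybe_send(global_state[i]) for i in (0, 1)]
    agents[0].receive(msgs[1])
    agents[1].receive(msgs[0])
    joint = tuple(agents[i].act(global_state[i]) for i in (0, 1))
    return joint, sum(m is not None for m in msgs)

if __name__ == "__main__":
    for state in product(LOCAL_STATES, repeat=2):
        central = centralized_policy(state)
        for always in (True, False):
            joint, n_msgs = run_step(state, always)
            print(f"state={state} central={central} decentralized={joint} "
                  f"messages={n_msgs} agrees={joint == central}")
```

Running the script shows that the always-communicate agents reproduce the centralized joint action in every global state, while the silent agents diverge whenever their default estimates are wrong; the transformation strategies the paper studies occupy the space between these two extremes, balancing expected utility against communication.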