The problem of deriving joint policies for a group of agents that maximize some joint reward function can be modelled as a decentralized partially observable Markov decision process (DEC-POMDP). Significant algorithms have been developed for single-agent POMDPs; however, with a few exceptions, effective algorithms for deriving policies for decentralized POMDPs have not been developed. As a first step, we present new algorithms for solving decentralized POMDPs. In particular, we describe an exhaustive search algorithm for finding a globally optimal solution and analyze its complexity, which we show to be doubly exponential in the number of agents and the time horizon, highlighting the importance of more tractable approximations. We then define a class of algorithms that we refer to as "Joint Equilibrium-based Search for Policies" (JESP) and describe both an exhaustive algorithm and a dynamic programming algorithm for JESP. Finally, we empirically compare the exhaustive JESP algorithm with the globally optimal exhaustive algorithm.
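To give a concrete sense of the doubly exponential blow-up that the globally optimal exhaustive search faces, the following minimal sketch (not from the paper; the function and parameter names `num_joint_policies`, `num_agents`, `num_actions`, `num_observations`, and `horizon` are illustrative) counts the deterministic joint policies a brute-force search over a finite-horizon DEC-POMDP would have to evaluate, assuming each agent's policy maps every observation history shorter than the horizon to an action.

```python
def num_joint_policies(num_agents: int, num_actions: int,
                       num_observations: int, horizon: int) -> int:
    """Count deterministic joint policies for a finite-horizon DEC-POMDP.

    Each agent's policy assigns an action to every observation history of
    length 0 .. horizon-1, so one agent has num_actions ** num_histories
    policies, and the joint policy space is that raised to num_agents.
    """
    # Number of observation histories of length 0 through horizon-1.
    if num_observations == 1:
        num_histories = horizon
    else:
        num_histories = (num_observations ** horizon - 1) // (num_observations - 1)
    per_agent_policies = num_actions ** num_histories
    return per_agent_policies ** num_agents


if __name__ == "__main__":
    # Even a tiny two-agent problem (2 actions, 2 observations) grows
    # doubly exponentially in the horizon.
    for T in range(1, 5):
        print(T, num_joint_policies(num_agents=2, num_actions=2,
                                    num_observations=2, horizon=T))
```

Running the sketch shows the joint policy count jumping from 16 at horizon 1 to astronomically large values within a few steps, which is exactly why the exhaustive globally optimal algorithm quickly becomes infeasible and why approximations such as JESP are of interest.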