Towards Computing Optimal Policies for Decentralized POMDPs

The problem of deriving joint policies for a group of agents that maximize some joint reward function can be modelled as a decentralized partially observable Markov decision process (DEC-POMDP). Although significant algorithms have been developed for single-agent POMDPs, effective algorithms for deriving policies for decentralized POMDPs have, with a few exceptions, not been developed. As a first step, we present new algorithms for solving decentralized POMDPs. In particular, we describe an exhaustive search algorithm for a globally optimal solution and analyze its complexity, which we find to be doubly exponential in the number of agents and the time horizon, highlighting the importance of more feasible approximations. We define a class of algorithms which we refer to as “Joint Equilibrium-based Search for Policies” (JESP) and describe an exhaustive algorithm and a dynamic programming algorithm for JESP. Finally, we empirically compare the exhaustive JESP algorithm with the globally optimal exhaustive algorithm.
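To make the contrast between the two search strategies concrete, the following is a minimal sketch, not taken from the paper: all names (`candidate_policies`, `evaluate_joint`, the toy reward table) are hypothetical, and each agent's policy space is assumed to be given as an explicit finite list of candidates. The globally optimal search evaluates every joint policy in the cross product of the agents' policy spaces, while the JESP-style loop holds all but one agent's policy fixed and exhaustively searches the free agent's candidates for a best response, terminating at a joint policy from which no single agent can unilaterally improve the joint reward.

```python
from itertools import product

def global_exhaustive(candidate_policies, evaluate_joint):
    """Globally optimal search: evaluate every joint policy in the
    cross product of the agents' policy spaces and keep the best.
    The size of this product is what makes the globally optimal
    algorithm infeasible for all but the smallest problems."""
    best = max(product(*candidate_policies), key=evaluate_joint)
    return best, evaluate_joint(best)

def exhaustive_jesp(candidate_policies, evaluate_joint):
    """JESP-style search: cycle through the agents, fixing every
    other agent's policy and exhaustively searching the free agent's
    candidates for a best response.  Stops when no single agent can
    improve the joint reward, i.e. at a locally optimal joint policy
    (a Nash equilibrium)."""
    joint = [cands[0] for cands in candidate_policies]  # arbitrary start
    best = evaluate_joint(tuple(joint))
    improved = True
    while improved:
        improved = False
        for i, cands in enumerate(candidate_policies):
            for pi in cands:
                trial = tuple(joint[:i] + [pi] + joint[i + 1:])
                value = evaluate_joint(trial)
                if value > best:
                    joint[i], best, improved = pi, value, True
    return tuple(joint), best

if __name__ == "__main__":
    # Hypothetical two-agent example: policies are opaque labels and the
    # expected joint reward is a lookup table.
    rewards = {("a0", "b0"): 1.0, ("a0", "b1"): 0.0,
               ("a1", "b0"): 0.0, ("a1", "b1"): 2.0}
    cands = [["a0", "a1"], ["b0", "b1"]]
    print(global_exhaustive(cands, rewards.get))  # (('a1', 'b1'), 2.0)
    print(exhaustive_jesp(cands, rewards.get))    # (('a0', 'b0'), 1.0)
```

Note that on this toy reward table the JESP loop stops at the equilibrium ("a0", "b0"): neither agent can improve the joint reward by deviating alone, even though the joint deviation to ("a1", "b1") is better. This illustrates why JESP yields a locally rather than globally optimal joint policy, and why the result can depend on the starting joint policy; random restarts are a common remedy in practice.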