Multiagent Reinforcement Learning in Stochastic Games with Continuous Action Spaces

We investigate the learning problem in stochastic games with continuous action spaces. We focus on repeated normal-form games and discuss issues in modelling mixed strategies and in adapting learning algorithms designed for finite-action games to the continuous-action domain. We apply variable-resolution techniques to two simple multi-agent reinforcement learning algorithms, PHC and Minimax-Q. Preliminary experiments show that our variable-resolution partitioning method successfully identifies important regions of the action space while keeping the total number of partitions low.
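The abstract does not spell out the algorithmic details, so the following is only a minimal Python sketch of the general idea: a PHC-style policy update over a variable-resolution partition of a one-dimensional action interval, where frequently visited, high-value cells are split to concentrate resolution on important action regions. The class and parameter names (VariableResolutionPHC, split_visits, max_cells, the splitting rule itself) are hypothetical illustrations, not the method studied in this paper.

```python
import random
from dataclasses import dataclass


@dataclass
class Cell:
    """One partition of the continuous action interval [lo, hi)."""
    lo: float
    hi: float
    q: float = 0.0      # value estimate for actions sampled in this cell
    visits: int = 0

    def sample(self) -> float:
        return random.uniform(self.lo, self.hi)


class VariableResolutionPHC:
    """Hypothetical sketch: PHC over a variable-resolution action partition."""

    def __init__(self, lo=0.0, hi=1.0, alpha=0.1, delta=0.05,
                 split_visits=50, max_cells=32):
        self.cells = [Cell(lo, hi)]
        self.policy = [1.0]              # mixed strategy over cells
        self.alpha = alpha               # Q-value step size
        self.delta = delta               # PHC policy step size
        self.split_visits = split_visits  # visits before a cell may split
        self.max_cells = max_cells       # cap on total partitions

    def act(self):
        """Sample a cell from the mixed strategy, then an action within it."""
        i = random.choices(range(len(self.cells)), weights=self.policy)[0]
        return i, self.cells[i].sample()

    def update(self, i, reward):
        c = self.cells[i]
        c.visits += 1
        # Repeated-game (stateless) Q update for the chosen cell.
        c.q += self.alpha * (reward - c.q)

        # PHC: shift probability mass toward the currently greedy cell.
        best = max(range(len(self.cells)), key=lambda j: self.cells[j].q)
        n = len(self.cells)
        for j in range(n):
            step = self.delta if j == best else -self.delta / max(n - 1, 1)
            self.policy[j] = min(1.0, max(0.0, self.policy[j] + step))
        total = sum(self.policy)
        self.policy = [p / total for p in self.policy]

        # Variable resolution: split a heavily visited greedy cell in half,
        # keeping total partitions below max_cells.
        if c.visits >= self.split_visits and i == best and n < self.max_cells:
            mid = 0.5 * (c.lo + c.hi)
            self.cells[i:i + 1] = [Cell(c.lo, mid, c.q), Cell(mid, c.hi, c.q)]
            self.policy[i:i + 1] = [self.policy[i] / 2, self.policy[i] / 2]
```

As a usage illustration, one could repeatedly call `act()`, play the sampled action in a repeated normal-form game against an opponent, and feed the resulting payoff back through `update()`; a Minimax-Q variant would replace the greedy policy step with a minimax value computation over the joint (partitioned) action space.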