Decentralized Solutions and Tactics for RTS

Decision-theoretic control of multiple units in game AI [3, 5] is a notoriously hard problem: the state and action spaces grow exponentially in the number of agents, which makes it an interesting testbed for decision-theoretic learning algorithms. There are two main approaches: centralized and decentralized. In the centralized approach, a single higher authority governs all agents' actions, but this scales poorly because of the large state and action spaces. In the decentralized approach, each agent selects its own action independently. This reduces the action space, and can even reduce the state space when some features are not relevant to every agent, but it comes at the expense of losing optimality guarantees.

In this research, we use the BroodWar API 1 for the real-time strategy (RTS) game StarCraft (see Figure 1) to micro-manage multiple units in a battle simulation with game AI. More specifically, our problem is a zero-sum partially observable stochastic game (POSG) between two separate, homogeneous groups of units, where each group's objective is to defeat the other in battle. The battlefield is partially obscured to the units, as each agent has a limited visual range. Communication between agents on the same team is perfect and cost-free, but because the field is large the team as a whole still cannot observe the entire battlefield.

We take the decentralized approach with a novel state representation, and since POSGs are NEXP-complete [1] we make simplifying assumptions to keep the problem tractable. We apply model-based reinforcement learning, using Monte-Carlo sampling to learn the transition and reward functions, which leads to good coordination and strong performance.
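As a concrete illustration of this last step, the sketch below shows one way a single decentralized agent could estimate its transition and reward functions from Monte-Carlo samples of its own local experience and then plan greedily over that learned model. The tabular local-state abstraction, action set, and planning schedule are illustrative assumptions for exposition, not the exact state representation or algorithm configuration used in our experiments.

```python
from collections import defaultdict
import random


class ModelBasedAgent:
    """One decentralized unit (illustrative sketch, not the paper's exact method).

    The agent counts observed transitions and rewards over its own local state
    space (Monte-Carlo model estimation), then runs a few sweeps of value
    iteration on the estimated model to pick actions independently.
    """

    def __init__(self, actions, gamma=0.95):
        self.actions = actions
        self.gamma = gamma
        # Monte-Carlo counts for the transition model: N(s, a, s')
        self.trans_counts = defaultdict(lambda: defaultdict(int))
        # Running reward sums and visit counts for the reward model: R(s, a)
        self.reward_sum = defaultdict(float)
        self.sa_counts = defaultdict(int)
        # Value estimates over visited local states
        self.values = defaultdict(float)

    def observe(self, s, a, r, s_next):
        """Update transition and reward estimates from one sampled step."""
        self.trans_counts[(s, a)][s_next] += 1
        self.reward_sum[(s, a)] += r
        self.sa_counts[(s, a)] += 1

    def _q(self, s, a):
        """Q(s, a) under the current empirical transition and reward model."""
        n = self.sa_counts[(s, a)]
        if n == 0:
            return 0.0  # neutral default for unseen state-action pairs
        r_hat = self.reward_sum[(s, a)] / n
        expected_next = sum((c / n) * self.values[s2]
                            for s2, c in self.trans_counts[(s, a)].items())
        return r_hat + self.gamma * expected_next

    def plan(self, sweeps=50):
        """A few sweeps of value iteration over the states visited so far."""
        states = {s for (s, _) in self.sa_counts}
        for _ in range(sweeps):
            for s in states:
                self.values[s] = max(self._q(s, a) for a in self.actions)

    def act(self, s, epsilon=0.1):
        """Epsilon-greedy action selection on the learned model."""
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self._q(s, a))
```

Because each agent maintains a model only over its own local state and action space, the tables stay small as the number of units grows, which is the main computational benefit of the decentralized approach described above.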