Multi-Agent Common Knowledge Reinforcement Learning

Cooperative multi-agent reinforcement learning often requires decentralised policies, which severely limit the agents' ability to coordinate their behaviour. In this paper, we show that common knowledge between agents allows for complex decentralised coordination. Common knowledge arises naturally in a large number of decentralised cooperative multi-agent tasks, for example, when agents can reconstruct parts of each other's observations. Since agents can independently agree on their common knowledge, they can execute complex coordinated policies that condition on this knowledge in a fully decentralised fashion. We propose multi-agent common knowledge reinforcement learning (MACKRL), a novel stochastic actor-critic algorithm that learns a hierarchical policy tree. Higher levels in the hierarchy coordinate groups of agents by conditioning on their common knowledge, or delegate to lower levels with smaller subgroups but potentially richer common knowledge. The entire policy tree can be executed in a fully decentralised fashion. As the lowest policy tree level consists of independent policies for each agent, MACKRL reduces to independently learnt decentralised policies as a special case. We demonstrate that our method can exploit common knowledge for superior performance on complex decentralised coordination tasks, including a stochastic matrix game and challenging problems in StarCraft II unit micromanagement.
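To make the hierarchy concrete, below is a minimal PyTorch sketch of the two-agent case: a pair controller conditions on the pair's common knowledge and either picks a joint action or delegates to independent per-agent policies. The class names (PairController, AgentPolicy), the act function, and the shared-seed trick for agreeing on the sampled branch are illustrative assumptions for this sketch, not the authors' reference implementation.

```python
# Minimal sketch of a two-level MACKRL-style policy tree (assumed names/shapes).
import torch
import torch.nn as nn


class PairController(nn.Module):
    """Top level: conditions on the pair's common knowledge and either
    selects a joint action or delegates to the individual policies."""
    def __init__(self, ck_dim, n_actions, hidden=64):
        super().__init__()
        # n_actions^2 joint actions plus one extra 'delegate' decision.
        self.net = nn.Sequential(
            nn.Linear(ck_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions * n_actions + 1),
        )

    def forward(self, common_knowledge):
        return torch.distributions.Categorical(logits=self.net(common_knowledge))


class AgentPolicy(nn.Module):
    """Bottom level: an independent policy over one agent's own observation."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))


def act(pair, agents, common_knowledge, obs_a, obs_b, n_actions, seed):
    """Decentralised execution for one pair. Because the common knowledge
    and the random seed are shared, each agent can run the pair controller
    locally and sample an identical branch decision without communicating.
    (Using a shared seed for correlated sampling is an assumption of this
    sketch.) Shown here in one process for clarity; in a decentralised run
    each agent would evaluate only its own bottom-level policy."""
    torch.manual_seed(seed)
    choice = pair(common_knowledge).sample().item()
    if choice < n_actions * n_actions:
        # The pair controller picked a coordinated joint action.
        return choice // n_actions, choice % n_actions
    # Otherwise delegate: each agent samples from its own policy independently.
    return (agents[0](obs_a).sample().item(),
            agents[1](obs_b).sample().item())
```

Since the delegate branch just runs the two independent policies, dropping the pair controller (always delegating) recovers independently learnt decentralised policies, matching the special case noted in the abstract.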
