Learning to Communicate Using Counterfactual Reasoning

This paper introduces multi-agent counterfactual communication (MACC) learning, a novel approach to multi-agent communication learning. Many real-world problems are tackled with multi-agent techniques, yet in many of these tasks the agents do not observe the full state of the environment but only a limited observation. This lack of knowledge about the full state makes completing the objectives significantly harder or even impossible. The key to this problem lies in sharing observation information between agents, or in learning how to communicate the essential data. MACC addresses the partial observability of the agents by letting each agent learn its action policy and its communication policy simultaneously. We focus on decentralized Markov decision processes (Dec-MDPs), where the agents have joint observability: the full state of the environment can be determined from the combined observations of all agents. MACC uses counterfactual reasoning to train both the action and the communication policy, which allows agents to anticipate how other agents will react to certain messages and how the environment will react to certain actions, and thereby to learn more effective policies. MACC employs an actor-critic architecture with a centralized critic and decentralized actors; the critic is used to compute an advantage for both the action policy and the communication policy. We demonstrate our method on the Simple Reference particle environment of OpenAI and on an MNIST game, and compare the results against a communication baseline and a non-communication baseline. These experiments show that MACC trains agents with effective communication policies for each of these problems.
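To make the centralized-critic idea concrete, the following is a minimal sketch (not the authors' implementation) of a COMA-style counterfactual advantage, computed separately for one agent's action policy and communication policy. All names (`counterfactual_advantage`, `q_values_actions`, `q_values_messages`, `pi_action`, `pi_message`) are illustrative assumptions, and the critic outputs are replaced by random placeholders.

```python
import numpy as np

def counterfactual_advantage(q_values, pi):
    """Advantage of each option versus a policy-weighted counterfactual baseline.

    q_values: centralized-critic estimates for every option the agent could pick,
              with the other agents' choices held fixed (placeholder values here).
    pi:       the agent's current probability distribution over those options.
    """
    baseline = np.dot(pi, q_values)   # expected value under the agent's own policy
    return q_values - baseline        # one advantage per possible option

# Toy usage: 4 discrete environment actions and 3 discrete messages.
rng = np.random.default_rng(0)
q_values_actions = rng.normal(size=4)        # stand-in critic output per candidate action
q_values_messages = rng.normal(size=3)       # stand-in critic output per candidate message
pi_action = np.array([0.1, 0.2, 0.3, 0.4])   # decentralized actor: action policy
pi_message = np.array([0.5, 0.3, 0.2])       # decentralized actor: communication policy

adv_action = counterfactual_advantage(q_values_actions, pi_action)
adv_message = counterfactual_advantage(q_values_messages, pi_message)

chosen_action, chosen_message = 2, 0
# These scalars would weight the policy-gradient updates of the two actor heads.
print(adv_action[chosen_action], adv_message[chosen_message])
```

In this reading, the centralized critic supplies the per-option values, while each decentralized actor only needs its own advantage signal, one for the environment action and one for the message, to update its two policy heads.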
