SMAUG: A Sliding Multidimensional Task Window-Based MARL Framework for Adaptive Real-Time Subtask Recognition

Instead of making behavioral decisions directly from the exponentially expanding joint observation-action space, subtask-based multi-agent reinforcement learning (MARL) methods enable agents to learn how to tackle different subtasks. Most existing subtask-based MARL methods build on hierarchical reinforcement learning (HRL). However, these approaches often limit the number of subtasks, perform subtask recognition only periodically, and can identify and execute a specific subtask only within a predefined, fixed time period, making them inflexible and ill-suited to diverse, dynamic scenarios with constantly changing subtasks. To overcome these restrictions, a \textbf{S}liding \textbf{M}ultidimensional t\textbf{A}sk window based m\textbf{U}lti-agent reinforcement learnin\textbf{G} framework (SMAUG) is proposed for adaptive real-time subtask recognition. It leverages a sliding multidimensional task window to extract essential subtask information from trajectory segments of varying lengths, formed by concatenating observed and predicted trajectories. An inference network is designed to predict future trajectories iteratively, in concert with the subtask-oriented policy network. Furthermore, intrinsic motivation rewards are defined to promote subtask exploration and behavioral diversity. SMAUG can be integrated with any Q-learning-based approach. Experiments on StarCraft II show that SMAUG not only outperforms all baselines but also exhibits a more pronounced and rapid rise in rewards during the early training stage.
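To fix intuition for the mechanism the abstract describes, below is a minimal PyTorch sketch of the core idea: the sliding multidimensional task window slices trajectory segments of several lengths out of a trajectory formed by concatenating observed steps with steps predicted one at a time by an inference network. All names (`SubtaskWindowEncoder`, `sliding_task_windows`, `inference_net`), tensor shapes, and the choice of window lengths are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class SubtaskWindowEncoder(nn.Module):
    """Hypothetical GRU encoder mapping a trajectory segment to a
    subtask embedding (an assumption; the paper's encoder may differ)."""
    def __init__(self, obs_dim: int, hidden_dim: int):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)

    def forward(self, segment: torch.Tensor) -> torch.Tensor:
        # segment: (batch, window_len, obs_dim)
        _, h = self.gru(segment)        # h: (1, batch, hidden_dim)
        return h.squeeze(0)             # subtask embedding: (batch, hidden_dim)

def sliding_task_windows(observed: torch.Tensor, inference_net, horizons=(2, 4, 8)):
    """Concatenate the observed trajectory with iteratively predicted
    future steps, then slice segments of several lengths (the window's
    'dimensions') that straddle the current step: h observed steps
    followed by h predicted steps (an illustrative assumption)."""
    traj = observed                                    # (batch, t, obs_dim)
    for _ in range(max(horizons)):                     # iterative one-step rollout
        next_obs = inference_net(traj)                 # assumed shape: (batch, obs_dim)
        traj = torch.cat([traj, next_obs.unsqueeze(1)], dim=1)
    t = observed.shape[1]                              # requires t >= max(horizons)
    return {h: traj[:, t - h : t + h] for h in horizons}
```

Each returned segment could then be fed through `SubtaskWindowEncoder` to obtain one candidate subtask representation per window length, letting recognition happen at every step rather than at fixed intervals, which is the flexibility the abstract contrasts against periodic HRL-style recognition.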
