MADES: A Unified Framework for Integrating Agent-Based Simulation with Multi-Agent Reinforcement Learning

Agent-Based Simulation (ABS) provides distributed entities for simulating emergent or interactive agent behaviors, but those behaviors are usually governed by hard-coded rules and therefore lack intelligent decision-making capability. With the development of artificial intelligence, Multi-Agent Reinforcement Learning (MARL) has shown strong potential in robot control, autonomous driving, and human-machine competition, owing to its powerful learning capability for intelligent decision making. However, applying MARL directly to ABS raises many challenges, and no unified framework currently integrates the two. This paper proposes the Multi-Agent Discrete Event Simulation (MADES) framework, which constructs the multi-agent system from several DEVS atomic models and is well suited to representing various MARL architectures. A predator-prey simulation with a mainstream MARL algorithm is built under our framework; the training curves and the event transition time results verify both the learning performance and the simulation performance of the framework.