Coordination Structures Generated by Deep Reinforcement Learning in Distributed Task Executions

We investigate the coordination structures generated by a deep Q-network (DQN) in distributed task execution. Cooperation and coordination are crucial issues in multi-agent systems, and sophisticated design or learning is required to achieve effective coordination structures or regimes. In this paper, we show that agents establish a division of labor in a bottom-up manner, each determining its implicit area of responsibility, when the input to the DQN consists of the agent's own observation and its absolute location.
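The abstract does not specify how the observation and absolute location are combined into a single network input. As a minimal illustrative sketch, assuming a grid-world setting with a flattened local view, the following Python snippet shows one way such an input vector could be formed; the function name, shapes, and normalization are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def build_dqn_input(local_obs: np.ndarray, position: tuple,
                    grid_size: tuple) -> np.ndarray:
    """Concatenate an agent's own observation with its absolute location.

    local_obs : the agent's local view of its surroundings (assumed shape).
    position  : the agent's (x, y) cell on the grid.
    grid_size : (width, height) of the environment, used for normalization.
    """
    # Normalize the absolute coordinates to [0, 1] so they are on a
    # scale comparable to the observation values (an assumption here).
    norm_pos = np.array([position[0] / grid_size[0],
                         position[1] / grid_size[1]], dtype=np.float32)
    # The DQN input is the agent-local view plus its absolute location;
    # the location component is what could let each agent learn an
    # implicit "responsible area" of its own.
    return np.concatenate([local_obs.astype(np.float32).ravel(), norm_pos])

# Example: a 5x5 local window around the agent on a 20x20 grid.
obs = np.zeros((5, 5), dtype=np.float32)
x = build_dqn_input(obs, position=(3, 7), grid_size=(20, 20))
print(x.shape)  # (27,) -> 25 observation values + 2 position values
```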