Efficient Team Formation Based on Learning and Reorganization and Influence of Communication Delay

We propose a distributed team formation method for multi-agent systems (MAS) that uses reinforcement learning and dynamic reorganization while taking communication delay into account. A task in a distributed environment is usually accomplished by executing a number of subtasks that require different functions and resources. These subtasks must be processed cooperatively by an appropriate team of agents that have the required functions and sufficient resources, but at the design stage of the system it is difficult to anticipate what kinds of tasks will be requested in a dynamic and open environment. It is also unknown whether the inter-agent network (that is, the organization of agents) is appropriate for forming teams for the given tasks. In addition, communication delay between agents always occurs in actual systems and often causes tasks to fail or be delayed. Therefore, both appropriate team formation and (re)organization suited to the request patterns of incoming tasks and to the environment where the agents are deployed are required. The proposed method combines learning for team formation with reorganization in a way that adapts to the environment, including task generation patterns and communication delays that may change dynamically. We show that the method improves overall performance and increases the success rate of team formation in a dynamic environment.
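To make the setting concrete, the following minimal sketch (not the paper's actual algorithm) illustrates one way a leader agent might learn which neighbors to invite for each required function while stochastic communication delay can cause team formation to miss a deadline. The class names (`Neighbor`, `LeaderAgent`), the exponential delay model, and the moving-average value update are illustrative assumptions, not details taken from the proposed method.

```python
import random

class Neighbor:
    """A candidate team member offering one function, reached over a delayed link."""
    def __init__(self, name, function, mean_delay):
        self.name = name
        self.function = function        # capability this agent provides
        self.mean_delay = mean_delay    # average reply delay (assumed model)

    def reply_delay(self):
        # Stochastic communication delay for accept/reject messages (assumption).
        return random.expovariate(1.0 / self.mean_delay)

class LeaderAgent:
    """Learns, per neighbor, an estimated value of inviting that neighbor."""
    def __init__(self, neighbors, alpha=0.1, epsilon=0.1):
        self.neighbors = neighbors
        self.alpha = alpha              # learning rate
        self.epsilon = epsilon          # exploration rate
        self.value = {n.name: 0.5 for n in neighbors}

    def pick(self, function):
        candidates = [n for n in self.neighbors if n.function == function]
        if not candidates:
            return None
        if random.random() < self.epsilon:
            return random.choice(candidates)          # explore
        return max(candidates, key=lambda n: self.value[n.name])  # exploit

    def form_team(self, required_functions, deadline):
        members = [self.pick(f) for f in required_functions]
        if any(m is None for m in members):
            return False
        # Formation succeeds only if every invited member replies before the deadline.
        success = all(m.reply_delay() <= deadline for m in members)
        for m in members:
            # Exponential moving average toward the observed outcome (assumed update).
            self.value[m.name] += self.alpha * (float(success) - self.value[m.name])
        return success

if __name__ == "__main__":
    random.seed(0)
    neighbors = [Neighbor(f"a{i}", f, d)
                 for i, (f, d) in enumerate([("store", 1.0), ("store", 3.0),
                                             ("compute", 1.5), ("compute", 4.0)])]
    leader = LeaderAgent(neighbors)
    results = [leader.form_team(["store", "compute"], deadline=2.5)
               for _ in range(500)]
    print("success rate:", sum(results) / len(results))
```

In this toy setup the leader gradually favors the low-delay neighbor for each function, so the success rate rises over the run; the paper's contribution additionally reorganizes the inter-agent network itself, which this sketch does not model.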