An Integrated Multilevel Learning Approach to Multiagent Coalition Formation

In this paper we describe an integrated multilevel learning approach to multiagent coalition formation in a real-time environment. In our domain, agents negotiate to form teams to solve joint problems. The agent that initiates a coalition shoulders the responsibility of overseeing and managing the formation process, which consists of two stages. During the initialization stage, the initiating agent identifies candidates for its coalition, i.e., known neighbors that could help. During the finalization stage, it negotiates with these candidates to determine which neighbors are willing to help. Because our domain is dynamic, noisy, and time-constrained, the resulting coalitions are not optimal. However, our approach employs learning mechanisms at several levels to improve the quality of the coalition formation process. At the tactical level, we use reinforcement learning to identify viable candidates based on their potential utility to the coalition, and case-based learning to refine negotiation strategies. At the strategic level, we use distributed, cooperative case-based learning to improve general negotiation strategies. We have implemented these three learning components and conducted experiments in multisensor target tracking and CPU re-allocation applications.
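The two-stage process described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class and method names (`InitiatingAgent`, `initialize_coalition`, `finalize_coalition`), the utility table, and the learning rate are all assumptions introduced for illustration. It shows how an initiating agent might rank known neighbors by a learned potential-utility estimate (stage one) and then reinforce that estimate from negotiation outcomes (stage two).

```python
class InitiatingAgent:
    """Illustrative sketch of two-stage coalition formation.

    All names and parameters here are hypothetical, not taken
    from the paper's actual design.
    """

    def __init__(self, neighbors, alpha=0.3):
        # Potential-utility estimate for each known neighbor,
        # refined over time by a simple reinforcement update.
        self.utility = {n: 0.5 for n in neighbors}
        self.alpha = alpha  # learning rate for the update rule

    def initialize_coalition(self, k):
        """Stage 1: rank known neighbors by learned potential
        utility and select the top-k as coalition candidates."""
        ranked = sorted(self.utility, key=self.utility.get, reverse=True)
        return ranked[:k]

    def finalize_coalition(self, candidates, willing):
        """Stage 2: negotiate with each candidate; keep those willing
        to help, and nudge each utility estimate toward the observed
        outcome (a tactical-level reinforcement update)."""
        coalition = []
        for c in candidates:
            reward = 1.0 if willing(c) else 0.0
            self.utility[c] += self.alpha * (reward - self.utility[c])
            if reward:
                coalition.append(c)
        return coalition


# Example usage with a stubbed willingness predicate standing in
# for real-time negotiation.
agent = InitiatingAgent(["a", "b", "c", "d"])
candidates = agent.initialize_coalition(3)
team = agent.finalize_coalition(candidates, willing=lambda c: c != "c")
```

Willing neighbors have their utility estimates pulled upward and unwilling ones downward, so later coalitions favor neighbors that have actually helped before.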