Local decision-making in multi-agent systems

This thesis presents a new approach to local decision-making in multi-agent systems with varying amounts of communication. Here, local decision-making refers to action choices that are made in a decentralized fashion by individual agents, based on the information that is locally available to them. The work described here is set within the multi-agent decision process framework. Unreliable, faulty, or stochastic communication patterns pose a challenge in these settings, which usually rely on precomputed, centralized solutions to control individual action choices.

Various approximate algorithms for local decision-making are developed for scenarios with and without sequentiality. The construction of these techniques draws strongly on methods of Bayesian inference. Their performance is evaluated on synthetic benchmark scenarios and compared to that of a more conservative approach that guarantees coordinated action choices, as well as to a completely decentralized solution. In addition, the method is applied to a surveillance task based on real-world data.

These simulation results show that the algorithms presented here can outperform more traditional approaches in many settings and provide a means for flexible, scalable decision-making in systems with varying information exchange between agents.
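As a minimal illustration of the general idea of local decision-making (a sketch only, not the method developed in this thesis), the following example shows a single agent maintaining a local Bayesian belief over a hidden world state, updating it from a locally received observation, and choosing the action that maximizes expected reward under that belief; all state spaces, models, and values are hypothetical.

```python
import numpy as np

# Hypothetical illustration: one agent holds a local belief over a hidden
# world state, updates it from its own (local) observation via Bayes' rule,
# and picks the action with the highest expected reward under that belief.

# Two hidden states, two actions; rows = states, columns = actions.
reward = np.array([[1.0, 0.0],    # reward of each action in state 0
                   [0.0, 1.0]])   # reward of each action in state 1

# Observation model: P(observation | state); rows = states, columns = observations.
obs_model = np.array([[0.8, 0.2],
                      [0.3, 0.7]])

def update_belief(belief, observation):
    """Bayesian belief update from a locally received observation."""
    posterior = belief * obs_model[:, observation]
    return posterior / posterior.sum()

def choose_action(belief):
    """Greedy local decision: maximize expected reward under the belief."""
    expected = belief @ reward          # expected reward of each action
    return int(np.argmax(expected))

belief = np.array([0.5, 0.5])           # uniform prior over the two states
belief = update_belief(belief, observation=1)
print("posterior belief:", belief)      # e.g. [0.22 0.78]
print("chosen action:", choose_action(belief))
```

In a multi-agent setting, each agent would run such an update on its own observations and on whatever messages happen to arrive from other agents, which is why unreliable or stochastic communication directly affects the quality of the resulting local decisions.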