Future sparse interactions: a MARL approach
Recent research has demonstrated that considering local interactions among agents in specific parts of the state space is a successful way of simplifying the multi-agent learning process. By taking other agents into account only when a conflict is possible, an agent can significantly reduce the state-action space in which it learns. Current approaches, however, rely only on immediate rewards to detect conflicts. In this paper, we contribute a reinforcement learning algorithm that learns when a strategic interaction among agents is needed, several time-steps before the conflict is reflected by the (immediate) reward signal.
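To make the sparse-interaction idea concrete, below is a minimal Python sketch, not the paper's algorithm: an independent Q-learner that expands its state representation with another agent's observation only in states flagged as potential conflicts. The class name SparseInteractionAgent, the flag_conflict hook, and all parameters are illustrative assumptions; the paper's actual contribution, a detector that fires several time-steps before the conflict shows up in the immediate reward, is left here as a stub.

```python
import random
from collections import defaultdict

class SparseInteractionAgent:
    """Independent Q-learner that augments its state with the other
    agent's observation only in states flagged as future conflicts."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)       # Q-values over (state key, action)
        self.conflict_states = set()      # local states needing coordination

    def _key(self, state, other):
        # Expand the state representation only where coordination matters;
        # elsewhere the agent learns over its own local state.
        return (state, other) if state in self.conflict_states else (state,)

    def act(self, state, other):
        # Epsilon-greedy action selection over the (possibly augmented) state.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        key = self._key(state, other)
        return max(self.actions, key=lambda a: self.q[(key, a)])

    def update(self, state, other, action, reward, next_state, next_other):
        # Standard one-step Q-learning update over the augmented state keys.
        key = self._key(state, other)
        next_key = self._key(next_state, next_other)
        best_next = max(self.q[(next_key, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(key, action)]
        self.q[(key, action)] += self.alpha * td_error

    def flag_conflict(self, state):
        # Stub for the paper's contribution: a detector that flags `state`
        # several time-steps before the conflict is reflected in the reward.
        self.conflict_states.add(state)
```

Keying the single Q-table on a tuple lets augmented and purely local states coexist, so the state-action space grows only in the (hopefully few) regions where coordination is actually required.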