Modeling difference rewards for multiagent learning
Difference rewards (a particular instance of reward shaping) have been used to scale multiagent domains to large numbers of agents, but they remain difficult to compute in many domains. We present an approach to modeling the global reward with function approximation that allows fast computation of shaped difference rewards. We demonstrate that this model yields significant improvements in behavior on two air traffic control problems, and we show how the model of the global reward may be learned either online or offline using a linear combination of neural networks.
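The idea can be sketched concretely: with a learned approximation Ĝ of the global reward, each agent's difference reward is Ĝ(z) − Ĝ(z₋ᵢ), where z₋ᵢ replaces agent i's contribution with a fixed counterfactual default. The sketch below is illustrative only; the function names, the toy single-hidden-layer model `g_hat`, and the scalar per-agent state are assumptions, not the paper's actual architecture (which combines multiple neural networks).

```python
import numpy as np

def difference_rewards(global_model, joint_state, default_value):
    """Approximate D_i = G(z) - G(z_-i) for every agent i, where z_-i
    replaces agent i's state with a counterfactual default value.
    `global_model` is any learned approximation of the global reward G."""
    g = global_model(joint_state)
    rewards = []
    for i in range(len(joint_state)):
        z_minus_i = list(joint_state)
        z_minus_i[i] = default_value  # counterfactual: remove agent i's effect
        rewards.append(g - global_model(z_minus_i))
    return rewards

# Toy stand-in for the learned global-reward model: a fixed random
# single-hidden-layer network (hypothetical, for illustration only).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=8)

def g_hat(z):
    return float(np.tanh(np.asarray(z) @ W1) @ W2)

# Four agents, each contributing one scalar to the joint state.
D = difference_rewards(g_hat, [0.2, -0.5, 1.0, 0.3], 0.0)
```

Because each counterfactual evaluation only queries the cheap learned model Ĝ rather than re-running the true system, all per-agent shaped rewards can be computed quickly, which is the practical point of the approach.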