Optimal Rewards for Cooperative Agents

Following work on designing optimal rewards for single agents, we define a multiagent optimal rewards problem (ORP) in cooperative (specifically, common-payoff or team) settings. Solving this problem yields individual agent reward functions that guide the agents to better overall team performance than they achieve when all agents direct their behavior with the same given team-reward function. We present a multiagent architecture in which each agent learns good reward functions from experience using a gradient-based algorithm, in addition to performing the usual task of planning good policies (here with respect to the learned rather than the given reward function). Multiagency introduces the challenge of nonstationarity: because the agents learn simultaneously, each agent's reward-learning problem is nonstationary and depends on the other agents' evolving reward functions. We demonstrate on two simple domains that the proposed architecture outperforms the conventional approach in which all agents use the same given team-reward function (even when accounting for the resource overhead of reward learning); that the learning algorithm performs stably despite the nonstationarity; and that learning individual reward functions can lead to better specialization of roles than is possible with a shared reward, whether learned or given.
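
Below is a minimal, self-contained sketch of this architecture, offered only as an illustration: the toy two-agent task, the greedy one-step action choice standing in for a full planner, and the SPSA gradient estimator standing in for the paper's gradient-based reward-learning algorithm are all assumptions, and every name in the code (run_episode, spsa_gradient, and so on) is hypothetical rather than taken from the paper.

import numpy as np

# Sketch of the multiagent optimal-rewards architecture (illustrative,
# NOT the authors' code). Each agent i keeps its own reward parameters
# theta_i, acts with respect to its learned reward in an inner loop, and
# updates theta_i in an outer loop by stochastic gradient ascent on the
# single TEAM return.

rng = np.random.default_rng(0)
N_AGENTS, N_ROLES, HORIZON = 2, 2, 10

def run_episode(thetas):
    # Inner loop: each agent greedily picks the role its own learned
    # reward theta_i prefers (a trivial stand-in for planning), while the
    # environment pays out one shared team reward per step.
    team_return = 0.0
    for _ in range(HORIZON):
        actions = [int(np.argmax(theta + 0.1 * rng.standard_normal(N_ROLES)))
                   for theta in thetas]
        # Toy team reward that favors specialization: pay 1 only when the
        # agents take on different roles.
        team_return += 1.0 if len(set(actions)) == N_AGENTS else 0.0
    return team_return

def spsa_gradient(thetas, i, eps=0.2):
    # Simultaneous-perturbation (SPSA) estimate of d(team return)/d(theta_i),
    # an assumed stand-in for the paper's gradient-based algorithm.
    delta = rng.choice([-1.0, 1.0], size=N_ROLES)
    plus = [t.copy() for t in thetas]
    minus = [t.copy() for t in thetas]
    plus[i] += eps * delta
    minus[i] -= eps * delta
    return (run_episode(plus) - run_episode(minus)) / (2.0 * eps) * delta

# Outer loop: all agents update their reward functions simultaneously, so
# each agent's learning target moves as the others' rewards evolve; this
# is the nonstationarity discussed above.
thetas = [np.zeros(N_ROLES) for _ in range(N_AGENTS)]
for step in range(200):
    grads = [spsa_gradient(thetas, i) for i in range(N_AGENTS)]
    for i in range(N_AGENTS):
        thetas[i] += 0.05 * grads[i]

print("learned per-agent reward parameters:", [t.round(2) for t in thetas])

Run repeatedly, this sketch typically settles with the two agents' learned rewards favoring different roles, which is the kind of role specialization that individual (as opposed to shared) reward functions make possible.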
