Classically, approaches to multiagent policy learning assumed that the agents, through interaction and/or prior knowledge of all players' reward functions, would converge to an interdependent solution called an "equilibrium". Recently, however, some researchers have questioned the necessity and validity of equilibrium as the central solution concept for multiagent learning. They argue that a "good" learning algorithm is one that is efficient with respect to a certain class of counterparts. Adaptive players are an important class of agents: they learn their policies separately from maintaining beliefs about their counterparts' future actions, and they make their decisions based on that policy and the current belief. In this paper, we propose an efficient learning algorithm for play against adaptive counterparts, called Adaptive Dynamics Learner (ADL), which learns a policy over the opponents' adaptive dynamics rather than over simple actions and beliefs and, by doing so, exploits these dynamics to obtain a higher utility than any equilibrium strategy can provide. We tested our algorithm on a substantial, representative set of well-known and illustrative matrix games and observed that the ADL agent is highly efficient against the Adaptive Play Q-learning (APQ) agent and the Infinitesimal Gradient Ascent (IGA) agent. In self-play, when possible, ADL converges to a Pareto-optimal strategy that maximizes the welfare of all players.
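
To make the core idea concrete, the sketch below is our illustration (not the paper's actual ADL algorithm) of learning over an opponent's adaptive dynamics: a tabular Q-learner whose state is the recent window of joint actions, so that its policy is conditioned on how the opponent adapts rather than on raw action frequencies. All names and parameter values (DynamicsQLearner, window, the toy best-response opponent) are illustrative assumptions.

    import random
    from collections import defaultdict

    # A minimal sketch: a Q-learner whose state is the recent window of
    # joint actions, so its policy is defined over the opponent's adaptive
    # dynamics rather than over raw actions or beliefs about them.
    class DynamicsQLearner:
        def __init__(self, n_actions, window=1, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.n_actions = n_actions
            self.window = window          # how many past joint actions form the state
            self.alpha = alpha            # learning rate
            self.gamma = gamma            # discount factor
            self.epsilon = epsilon        # exploration rate
            self.q = defaultdict(lambda: [0.0] * n_actions)
            self.history = ()             # recent (own_action, opp_action) pairs

        def act(self):
            if random.random() < self.epsilon:
                return random.randrange(self.n_actions)
            values = self.q[self.history]
            return max(range(self.n_actions), key=values.__getitem__)

        def update(self, own_action, opp_action, reward):
            next_history = (self.history + ((own_action, opp_action),))[-self.window:]
            td_target = reward + self.gamma * max(self.q[next_history])
            self.q[self.history][own_action] += self.alpha * (
                td_target - self.q[self.history][own_action])
            self.history = next_history

    # Toy adaptive opponent in rock-paper-scissors (0=rock, 1=paper, 2=scissors):
    # it best-responds to the learner's previous action, so its behavior is a
    # deterministic function of the joint-action history.
    PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]   # learner's payoff matrix

    def adaptive_opponent(last_learner_action):
        return (last_learner_action + 1) % 3        # play whatever beats that action

    learner = DynamicsQLearner(n_actions=3)
    last_action, total = 0, 0.0
    for t in range(50000):
        a = learner.act()
        b = adaptive_opponent(last_action)
        r = PAYOFF[a][b]
        learner.update(a, b, r)
        last_action, total = a, total + r
    # Average payoff approaches +1 (minus exploration noise), well above the
    # rock-paper-scissors equilibrium value of 0.
    print("average payoff:", total / 50000)

Because the toy opponent's behavior is a deterministic function of the recent joint actions, a window of length one captures its dynamics exactly; richer adaptive opponents (for example, IGA's gradient dynamics) would require a longer window or features summarizing the opponent's internal state.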