Extensive-form games are a powerful tool for representing complex multi-agent interactions. Nash equilibrium strategies are commonly used as a solution concept for extensive-form games, but many games are too large for the computation of Nash equilibria to be tractable. In these large games, exploitability has traditionally been used to measure deviation from Nash equilibrium, and strategies are typically designed to minimize exploitability. However, while exploitability measures a strategy's worst-case performance, it fails to capture how likely that worst case is to be observed in practice. In fact, empirical evidence has shown that a less exploitable strategy can perform worse than a more exploitable strategy in one-on-one play against a variety of opponents. In this work, we propose a class of response functions that can be used to measure the strength of a strategy. We prove that standard no-regret algorithms can be used to learn optimal strategies for a scenario where the opponent uses one of these response functions. We demonstrate the effectiveness of this technique in Leduc Hold'em against opponents that use the UCT Monte Carlo tree search algorithm.
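To make the setting concrete, here is a minimal sketch (not from the paper) of the core idea: a standard no-regret learner optimizing against an opponent who plays according to a fixed response function. The specific choices below are illustrative assumptions only: the toy payoff matrix `A`, the softmax ("quantal") response with a `temperature` parameter as a stand-in for the paper's class of response functions, and the iteration count. The paper's experiments use extensive-form games such as Leduc Hold'em, whereas this sketch uses a small matrix game with regret matching for clarity.

```python
# Sketch: regret matching against a hypothetical softmax-response opponent
# in a toy zero-sum matrix game. Not the paper's algorithm or games.
import numpy as np

A = np.array([[1.0, -1.0, 0.5],
              [-0.5, 1.0, -1.0],
              [0.0, -0.5, 1.0]])   # row player's payoffs (toy example)

def softmax_response(row_strategy, temperature=1.0):
    """Opponent's column strategy: softmax over its expected payoffs (-A^T x)."""
    col_payoffs = -A.T @ row_strategy
    z = np.exp(col_payoffs / temperature)
    return z / z.sum()

regrets = np.zeros(A.shape[0])
strategy_sum = np.zeros(A.shape[0])
for t in range(10000):
    # Regret matching: play positive regrets normalized, else uniform.
    positive = np.maximum(regrets, 0.0)
    if positive.sum() > 0:
        x = positive / positive.sum()
    else:
        x = np.ones(len(regrets)) / len(regrets)
    strategy_sum += x

    y = softmax_response(strategy_sum / strategy_sum.sum())  # opponent responds to average play
    action_values = A @ y                                    # row player's expected payoff per action
    regrets += action_values - x @ action_values             # accumulate regret

avg_strategy = strategy_sum / strategy_sum.sum()
print("learned row strategy:", np.round(avg_strategy, 3))
```

In the paper's setting the learner would be a no-regret algorithm over an extensive-form game rather than a matrix game, but the structure is the same: accumulate regret against whatever the response function plays, and output the average strategy.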