This paper deals with 2-player coordination games with vanishing actions: repeated games in which all diagonal payoffs are strictly positive and all off-diagonal payoffs are zero, and in which, at any stage beyond r, a player who has not played a certain action during the last r stages unlearns it, so that it disappears from his action set. Such a game is called an r-restricted game. The stream of payoffs is evaluated by the average reward. For r = 1 the game strategically reduces to a one-shot game, and for r ≥ 3 it is shown in Schoenmakers (Int Game Theory Rev 4:119–126, 2002) that all payoffs in the convex hull of the diagonal payoffs are equilibrium rewards. For the case r = 2, this paper provides a characterization of the set of equilibrium rewards for 2 × 2 games of this type, together with a technique for finding the equilibrium rewards in m × m games. Subgame perfection is also discussed.
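The vanishing-actions mechanic described above can be made concrete with a small simulation. The sketch below is illustrative only (the function name, move-sequence interface, and finite-horizon averaging are assumptions, not from the paper): it plays finitely many stages of an r-restricted 2-player coordination game, removes any action a player has left idle for r consecutive stages, and returns the average reward over the simulated horizon.

```python
# Illustrative sketch (assumptions, not from the paper): finitely many stages
# of an r-restricted 2-player coordination game with vanishing actions.
# payoffs[i] is the (strictly positive) diagonal payoff when both players
# choose action i; all off-diagonal payoffs are zero.
def play(r, payoffs, moves1, moves2):
    """Return the average reward over the given move sequences.

    An action a player has not used during the last r stages is unlearned
    and disappears from that player's action set."""
    m = len(payoffs)
    # last stage at which each still-available action was played (0 = never)
    last_used = [{a: 0 for a in range(m)} for _ in range(2)]
    total = 0
    for t, stage_moves in enumerate(zip(moves1, moves2), start=1):
        for player, a in enumerate(stage_moves):
            if a not in last_used[player]:
                raise ValueError(f"player {player + 1} has unlearned action {a}")
            last_used[player][a] = t
        # coordination payoff only on the diagonal
        total += payoffs[stage_moves[0]] if stage_moves[0] == stage_moves[1] else 0
        # from stage r on, prune actions that were idle for the last r stages
        if t >= r:
            for player in range(2):
                for a in list(last_used[player]):
                    if t - last_used[player][a] >= r:
                        del last_used[player][a]
    return total / len(moves1)

# With r = 2, both players must alternate to keep both actions alive:
print(play(2, [1, 2], [0, 1, 0, 1], [0, 1, 0, 1]))  # 1.5
```

This illustrates why r = 2 is the delicate case: keeping an action alive requires using it at least every other stage, so any equilibrium reward must balance coordination payoffs against the alternation needed to preserve the action sets.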
[1] N. Vieille, et al. Two-player stochastic games II: The case of recursive games, 2000.
[2] P. Borm. A classification of 2×2 bimatrix games, 1987.
[3] J. Flesch, et al. Coordination games with vanishing actions, IGTR, 2002.
[4] A. Neyman, et al. Stochastic games, 1981.
[5] K. Arrow. The economic implications of learning by doing, 1962.
[6] H. Peters, et al. Unlearning by not doing: Repeated games with vanishing actions, 1995.
[7] N. Vieille. Two-player stochastic games I: A reduction, 2000.