Loss of skills in coordination games

This paper deals with 2-player coordination games with vanishing actions, which are repeated games where all diagonal payoffs are strictly positive and all non-diagonal payoffs are zero, with the following additional property: at any stage beyond r, if a player has not played a certain action during the last r stages, then he unlearns this action and it disappears from his action set. Such a game is called an r-restricted game. To evaluate the stream of payoffs we use the average reward. For r = 1 the game strategically reduces to a one-shot game, and for r ≥ 3 it is shown in Schoenmakers (Int Game Theory Rev 4:119–126, 2002) that all payoffs in the convex hull of the diagonal payoffs are equilibrium rewards. In this paper, for the case r = 2, we provide a characterization of the set of equilibrium rewards for 2 × 2 games of this type and a technique for finding the equilibrium rewards in m × m games. We also discuss subgame perfection.
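
As a minimal illustration of the model (the payoff matrix below is an assumed example chosen for exposition, not taken from the paper), consider the 2 × 2 coordination game

\[
\begin{pmatrix}
(3,3) & (0,0) \\
(0,0) & (1,1)
\end{pmatrix}.
\]

In the 2-restricted version of this game, a player who uses his first action at two consecutive stages unlearns his second action, so from the next stage on only the first action remains in his action set; the resulting infinite stream of payoffs is then evaluated by its average reward per stage.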