Steady state learning and Nash equilibrium

We study the steady states of a system in which players learn about the strategies their opponents are playing by updating their Bayesian priors in light of their observations. Players are matched at random to play a fixed extensive-form game, and each player observes the realized actions in his own matches, but not the intended off-path play of his opponents or the realized actions in other matches. Because players have finite lifetimes, there are steady states in which learning continually takes place. If lifetimes are long and players are very patient, the steady-state distribution of actions approximates that of a Nash equilibrium.
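As a minimal sketch of the belief dynamics just described, with notation assumed here rather than taken from the paper: let $\mu_i^0$ denote player $i$'s prior over the aggregate strategy profile $\sigma_{-i}$ of his potential opponents, and let $h_i^t$ denote the actions he has observed in his own matches through period $t$. Bayesian updating on own-match observations then takes the form
\[
\mu_i^{t+1}(\sigma_{-i}) \;\propto\; \Pr\!\left(h_i^t \mid \sigma_{-i}\right)\,\mu_i^{0}(\sigma_{-i}),
\]
so beliefs are disciplined only by play along the realized path, which is why opponents' intended off-path behavior need not be learned.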