Learning with Recency Bias

We examine the long-run implications of two models of learning with recency bias: recursive weights and limited memory. We show that both models generate similar beliefs, and that both have a weighted universal consistency property. Using the limited memory model, we are able to produce learning procedures that are both weighted universally consistent and converge with probability one to strict Nash equilibrium; to our knowledge, these are the first learning procedures that have this convergence property while also having desirable properties for the individual agents who use them.

JEL Classification Numbers: 001, 002
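To fix ideas, here is a minimal sketch of the two belief-formation rules named above, assuming "recursive weights" means exponentially discounted empirical frequencies and "limited memory" means empirical frequencies over a fixed-length window of recent observations; the parameter names, action labels, and functional forms are illustrative assumptions rather than the paper's definitions.

```python
from collections import deque, Counter

def recursive_weight_beliefs(observations, epsilon=0.1, actions=("A", "B")):
    """Recursive-weights sketch: exponentially discounted frequencies.

    Each new observation receives weight epsilon; the weight on older
    observations decays geometrically, so recent play dominates the belief.
    """
    belief = {a: 1.0 / len(actions) for a in actions}  # uniform prior (assumption)
    for obs in observations:
        for a in actions:
            belief[a] = (1 - epsilon) * belief[a] + epsilon * (1.0 if a == obs else 0.0)
    return belief

def limited_memory_beliefs(observations, window=20, actions=("A", "B")):
    """Limited-memory sketch: empirical frequencies over the last `window` plays."""
    recent = deque(observations, maxlen=window)  # forgets everything older
    counts = Counter(recent)
    total = max(len(recent), 1)
    return {a: counts[a] / total for a in actions}

# Example: an opponent who switched from action "A" to action "B" midway.
history = ["A"] * 50 + ["B"] * 10
print(recursive_weight_beliefs(history))  # tilted toward the recent run of "B"s
print(limited_memory_beliefs(history))    # last 20 plays: 10 A's and 10 B's
```

Both rules respond to the same recency pressure in different ways: the recursive rule never fully discards old data but downweights it geometrically, while the windowed rule discards it outright, which is consistent with the abstract's claim that the two models generate similar beliefs.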