Limit Points of Endogenous Misspecified Learning

We study how an agent learns from endogenous data when their prior belief is misspecified. We show that only uniform Berk–Nash equilibria can be long-run outcomes, and that all uniformly strict Berk–Nash equilibria have an arbitrarily high probability of being the long-run outcome for some initial beliefs. When the agent believes the outcome distribution is exogenous, every uniformly strict Berk–Nash equilibrium has positive probability of being the long-run outcome for any initial belief. We generalize these results to settings where the agent observes a signal before acting.
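As background for readers unfamiliar with the equilibrium concept, the following is a minimal sketch of a Berk–Nash equilibrium in the sense of Esponda and Pouzo (2016), not a statement from this paper; the notation (action set $A$, objective outcome distribution $Q^{*}$, subjective models $Q_{\theta}$ indexed by $\theta \in \Theta$, payoff function $u$) is introduced here only for illustration. A strategy $\sigma$ is a Berk–Nash equilibrium if it is optimal under some belief concentrated on the subjective models that best fit, in Kullback–Leibler divergence, the data that $\sigma$ itself generates:
\[
  \Theta(\sigma) \;=\; \operatorname*{arg\,min}_{\theta \in \Theta}\;
    \mathbb{E}_{a \sim \sigma}\!\left[
      D_{\mathrm{KL}}\!\bigl( Q^{*}(\cdot \mid a) \,\big\Vert\, Q_{\theta}(\cdot \mid a) \bigr)
    \right],
\]
\[
  \sigma \;\in\; \operatorname*{arg\,max}_{\sigma' \in \Delta(A)}\;
    \mathbb{E}_{\theta \sim \mu}\,\mathbb{E}_{a \sim \sigma'}\,
    \mathbb{E}_{y \sim Q_{\theta}(\cdot \mid a)}\bigl[\, u(a, y) \,\bigr]
  \qquad \text{for some } \mu \in \Delta\bigl(\Theta(\sigma)\bigr).
\]
The "uniform" and "uniformly strict" refinements referred to in the abstract strengthen this fixed-point condition; their precise definitions are given in the paper itself and are not reproduced here.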