Evolutionary Foundations of Solution Concepts for Finite, Two-Player, Normal-Form Games

This paper develops evolutionary foundations for noncooperative game-theoretic solution concepts. In particular, we envision a game as being repeatedly played by randomly and anonymously matched members of two populations. Agents initially play arbitrarily chosen pure strategies. As play progresses, a learning process or selection mechanism induces agents to switch from less profitable to more profitable strategies. The limiting outcomes of this dynamic process yield equilibria for the game in question, and the plausibility of an equilibrium concept then rests on the characteristics of the selection process from which it arises. The results suggest that if one accepts the evolutionary approach to equilibrium concepts, then one will embrace either rationalizable or perfect equilibria. The choice between the two hinges on whether the evolutionary process is sufficiently well behaved to yield convergence. In general, there are robust adjustment processes that converge as well as robust processes that do not.
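
As one concrete illustration of such a selection process, the sketch below simulates a two-population replicator dynamic, a standard example of a dynamic under which agents shift toward more profitable strategies; the 2x2 payoff matrices, step size, and initial population shares are illustrative assumptions rather than anything specified in the paper. On a coordination game of this kind the process converges to a Nash equilibrium, whereas the same dynamic applied to a game such as Matching Pennies cycles and need not converge, mirroring the distinction between convergent and nonconvergent adjustment processes drawn above.

```python
# Minimal, illustrative sketch (not the paper's formal model): one example of a
# payoff-monotone selection process -- the two-population replicator dynamic --
# applied to an assumed 2x2 coordination game. Strategies earning above-average
# payoffs grow in population share; if the process converges, its limit is a
# candidate equilibrium of the underlying game.
import numpy as np

# Hypothetical payoff matrices: A[i, j] is the row population's payoff and
# B[i, j] the column population's payoff when row strategy i meets column strategy j.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
B = A.copy()  # common-interest coordination game, chosen only for illustration

def replicator_step(x, y, A, B, dt=0.05):
    """One Euler step of the two-population replicator dynamic."""
    fx = A @ y                        # expected payoff of each row strategy
    fy = B.T @ x                      # expected payoff of each column strategy
    x = x + dt * x * (fx - x @ fx)    # above-average row strategies expand
    y = y + dt * y * (fy - y @ fy)    # above-average column strategies expand
    return x / x.sum(), y / y.sum()   # renormalize to guard against numerical drift

# Arbitrary interior initial shares, echoing the arbitrary initial play in the text.
x = np.array([0.4, 0.6])
y = np.array([0.4, 0.6])
for _ in range(10_000):
    x, y = replicator_step(x, y, A, B)

print("limiting row-population shares:   ", np.round(x, 3))
print("limiting column-population shares:", np.round(y, 3))
```

In this assumed game the shares converge to the pure-strategy profile in which both populations play their first strategy, a Nash equilibrium; replacing the payoff matrices with a zero-sum cycling game would instead produce persistent oscillation under the same dynamic.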