Designing Economic Agents that Act Like Human Agents: A Behavioral Approach to Bounded Rationality

Most economists accept that there are limits to the reasoning abilities of human beings, that human rationality is bounded. The question is how to model economic choices made under these limits. Where, between perfect rationality and its complete absence, are we to set the "dial of rationality," and how do we build this dial setting into our theoretical models? One approach to this problem is to lay down axioms or assumptions that suppose limits to economic agents' computational ability or memory, and investigate their consequences. This is useful, but it begs the question of how humans actually behave. A different approach (the one I suggest here) is to develop theoretical economic agents that act and choose in the way actual humans do. We could do this by representing agents as using parametrized decision algorithms, and choosing and calibrating these algorithms so that the agents' behavior matches real human behavior observed in the same decision context. Theoretical models using these "calibrated agents" would then, we could claim, furnish predictions based on actual rather than idealized behavior.

It is unlikely there exists some yet-to-be-defined decision algorithm, some "model of man," that would represent human behavior in all economic problems, an algorithm whose parameters would constitute universal constants of human behavior. Different contexts of decision making in the economy call for different actions; and an algorithm calibrated to reproduce human learning in a search problem might differ from one that reproduces strategic-choice behavior. We would likely need a repertoire of calibrated algorithms to cover the various contexts that might arise. Nevertheless, for a particular context of decision making, calibrating theoretical behavior to match human behavior would allow us to ask questions that are not answerable at present under the assumption of either perfect rationality or idealized learning.
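To make the idea of a parametrized decision algorithm concrete, here is a minimal sketch, my own illustration rather than anything specified in the text. It is a recency-weighted reinforcement chooser with two free parameters, a decay rate phi and a choice temperature tau; an experimenter could in principle calibrate these two numbers against observed human choices in the same decision context. The class name and both parameters are hypothetical.

```python
import math
import random

class CalibratedAgent:
    """Hypothetical parametrized decision algorithm.

    Two free parameters, the sort of thing one would calibrate to data:
      phi -- recency weight: how fast old payoff experience decays
      tau -- choice temperature: how noisily the agent picks actions
    """

    def __init__(self, n_actions, phi=0.1, tau=1.0, seed=None):
        self.phi = phi
        self.tau = tau
        self.attractions = [0.0] * n_actions  # accumulated propensities
        self.rng = random.Random(seed)

    def choose(self):
        # Softmax (logit) choice: higher-propensity actions are more
        # likely; tau controls the amount of decision noise.
        weights = [math.exp(a / self.tau) for a in self.attractions]
        r = self.rng.random() * sum(weights)
        cum = 0.0
        for i, w in enumerate(weights):
            cum += w
            if r <= cum:
                return i
        return len(weights) - 1

    def update(self, action, payoff):
        # Recency-weighted reinforcement of the chosen action.
        for i in range(len(self.attractions)):
            self.attractions[i] *= (1.0 - self.phi)
        self.attractions[action] += payoff

# Illustrative decision context: a two-armed bandit where action 0
# pays 1.0 and action 1 pays 0.2 (invented numbers).
agent = CalibratedAgent(n_actions=2, phi=0.1, tau=1.0, seed=42)
for _ in range(1000):
    a = agent.choose()
    agent.update(a, 1.0 if a == 0 else 0.2)
```

After enough rounds the agent's propensity for the higher-paying action dominates, but how quickly, and how noisily, depends on phi and tau, which is exactly where calibration to human experimental data would enter.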
We might want to know whether a given neoclassical model with human agents represented by "calibrated agents" will result in some standard asymptotic pattern, a rational-expectations equilibrium, say. We might ask whether agents calibrated to learn as humans do converge to some form of optimality, or interactively to a Nash equilibrium. And we might want to study the speed of adaptation in a particular economic model with human agents represented by calibrated agents.

What would it mean to calibrate an algorithm to "reproduce" human behavior? The object would be algorithmic behavior that reproduces statistically the characteristics of human choice, including the distinctive errors or departures from rationality that humans display.

Discussants: Ken Binmore, University of Michigan; Drew Fudenberg, MIT; John Geanakoplos, Yale University.
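Calibration in this statistical sense can be sketched as a simple fitting exercise. The snippet below is a self-contained illustration, not a method from the text: the payoffs, the "observed" human choice frequencies, and the single temperature parameter tau are all invented. It picks the parameter value whose predicted choice frequencies come closest, in squared error, to the frequencies we imagine having measured in an experiment.

```python
import math

def logit_choice_probs(payoffs, tau):
    """Softmax (logit) choice probabilities with temperature tau."""
    weights = [math.exp(v / tau) for v in payoffs]
    total = sum(weights)
    return [w / total for w in weights]

def calibrate_tau(payoffs, observed_freqs, grid):
    """Return the tau in grid whose predicted choice frequencies
    best match the observed human frequencies (least squared error)."""
    def sq_err(tau):
        pred = logit_choice_probs(payoffs, tau)
        return sum((p - o) ** 2 for p, o in zip(pred, observed_freqs))
    return min(grid, key=sq_err)

# Hypothetical experimental data: payoffs of three options and the
# frequencies with which human subjects chose them (invented numbers).
payoffs = [5.0, 3.0, 1.0]
observed = [0.60, 0.28, 0.12]
grid = [0.25 * k for k in range(1, 41)]  # candidate tau values, 0.25..10.0
tau_star = calibrate_tau(payoffs, observed, grid)
```

The fitted tau_star then parametrizes the theoretical agent; note that a low tau would mean near-perfect payoff maximization, while the intermediate value recovered here reproduces the "errors," the choices of dominated options, that the human data display.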