The Theory Is Predictive, but Is It Complete? An Application to Human Perception of Randomness

When we test a theory using data, it is common to focus on correctness: do the predictions of the theory match what we see in the data? But we also care about completeness: how much of the predictable variation in the data is captured by the theory? This question is difficult to answer, because in general we do not know how much "predictable variation" there is in the problem. In this paper, we consider approaches motivated by machine learning algorithms as a means of constructing a benchmark for the best attainable level of prediction. We illustrate our methods on the task of predicting human-generated random sequences. Relative to an atheoretical machine learning benchmark, we find that existing behavioral models explain roughly 10 to 12 percent of the predictable variation in this problem, and this fraction is robust across several variations on the problem. We also consider a version of this approach for analyzing field data from domains in which human perception and generation of randomness have been used as a conceptual framework; these include sequential decision-making and repeated zero-sum games. In these domains, our framework for testing the completeness of theories suggests that existing theoretical models may be more complete in their predictions for some domains than for others, indicating that our methods can offer a comparative perspective across settings. Overall, our results indicate that (i) there is a significant amount of structure in this problem that existing models have yet to capture and (ii) there are rich domains in which machine learning may provide a viable approach to testing completeness.
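To make the idea concrete, the following is a minimal sketch of how such a completeness measure might be computed: the share of the accuracy gap between a naive baseline and an atheoretical machine-learning benchmark that a behavioral model closes. The synthetic data generator, the always-alternate "theory," and the frequency-table benchmark below are illustrative stand-ins chosen for this sketch, not the procedures used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sequence(length=5000):
    """Synthetic 'human-generated' bits: behavior depends on the current run.
    After a fresh switch, the next bit tends to repeat; after longer runs it
    tends to alternate. Both tendencies are illustrative assumptions."""
    bits = [int(rng.integers(0, 2)), int(rng.integers(0, 2))]
    for _ in range(length - 2):
        run_is_one = bits[-1] != bits[-2]
        p_repeat = 0.55 if run_is_one else 0.30
        bits.append(bits[-1] if rng.random() < p_repeat else 1 - bits[-1])
    return np.array(bits)

def make_dataset(seq, k=3):
    """Feature = last k bits, label = next bit."""
    X = np.array([seq[i:i + k] for i in range(len(seq) - k)])
    y = seq[k:]
    return X, y

def accuracy(predict, X, y):
    preds = np.array([predict(x) for x in X])
    return float(np.mean(preds == y))

train = make_dataset(simulate_sequence(), k=3)
test = make_dataset(simulate_sequence(), k=3)

# Naive baseline: ignore history, always guess the majority bit.
majority_bit = int(round(train[1].mean()))
naive_acc = accuracy(lambda x: majority_bit, *test)

# Stylized behavioral "theory": people over-alternate, so predict a switch.
theory_acc = accuracy(lambda x: 1 - x[-1], *test)

# Atheoretical ML benchmark: empirical frequency table over k-bit histories.
table = {}
for x, label in zip(*train):
    table.setdefault(tuple(x), []).append(label)
lookup = {key: int(round(np.mean(vals))) for key, vals in table.items()}
ml_acc = accuracy(lambda x: lookup.get(tuple(x), majority_bit), *test)

# Completeness: fraction of the naive-to-benchmark gap closed by the theory.
completeness = (theory_acc - naive_acc) / (ml_acc - naive_acc)
print(f"naive={naive_acc:.3f}  theory={theory_acc:.3f}  ml={ml_acc:.3f}")
print(f"completeness of the theory ~ {completeness:.2f}")
```

Under these assumptions the ratio lands strictly between 0 and 1: the alternation rule captures some, but not all, of the structure that the history-based frequency table can learn, which is exactly the kind of gap the completeness measure is meant to quantify.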
