Choosing Between Adaptive Agents

Even with ample time and opportunity to use extensive data, people often make do with small samples, which increases their risk of making the wrong decision. A theoretical analysis indicates, however, that when the decision involves continually selecting among competing, adaptive agents who are eager to be selected, an error-prone evaluation may be beneficial to the decision maker. In this case, the chance of an error can motivate competitors to exert greater effort, improving their level of performance—which is the prime concern of the decision maker. This theoretical argument was tested empirically by comparing the effects of two levels of scrutiny of performance. Results show that minimal scrutiny can indeed lead to better performance than full scrutiny, and that the effect is conditional on a bridgeable difference between the competitors. We conclude by pointing out that small-sample-based, error-prone decisions may also maintain competition and diversity in the environment.
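
The competitive mechanism behind this claim can be illustrated with a minimal simulation sketch. This is not the paradigm used in the experiments reported here; it assumes two hypothetical agents whose observable successes occur with probabilities p_strong and p_weak, and a decision maker who selects whichever agent has the higher mean over n observations. With a small n the trailing agent is still chosen fairly often, which is exactly the chance of error argued to keep both competitors exerting effort; with a large n the outcome becomes nearly deterministic.

```python
import random

def selection_probability(p_strong, p_weak, n, trials=20_000):
    """Estimate how often the weaker agent is selected when the
    decision maker compares sample means over n observations.
    Each observation is a success (1) with the agent's true
    performance probability; ties are broken at random."""
    weak_wins = 0
    for _ in range(trials):
        strong = sum(random.random() < p_strong for _ in range(n))
        weak = sum(random.random() < p_weak for _ in range(n))
        if weak > strong or (weak == strong and random.random() < 0.5):
            weak_wins += 1
    return weak_wins / trials

if __name__ == "__main__":
    # Hypothetical, bridgeable performance gap between the two agents.
    for n in (1, 5, 20, 100):
        print(f"sample size {n:3d}: weaker agent selected "
              f"{selection_probability(0.6, 0.5, n):.3f} of the time")
```

Under these assumed parameters, the weaker agent's chance of being selected shrinks as the sample grows, so only minimal scrutiny leaves it a realistic prospect of winning and thus a reason to keep competing.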
