A number of well-known theorems, such as Cox's theorem and de Finetti's theorem, prove that any model of reasoning with uncertain information that satisfies specified conditions of "rationality" must satisfy the axioms of probability theory. I argue here that these theorems do not in themselves demonstrate that probabilistic models are in fact suitable for any specific task in automated reasoning, or plausible as cognitive models. First, the theorems establish only that some probabilistic model exists; they do not establish that a useful probabilistic model exists, i.e., one with a tractably small number of numerical parameters and a large number of independence assumptions. Second, there are in general many different probabilistic models for a given situation, many of which may be far more irrational, in the usual sense of the term, than a model that violates the axioms of probability theory. I illustrate this second point with an extended example of two structurally similar induction tasks for which the reasonable probabilistic models are very different.
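To make the parameter-count point concrete, consider a minimal worked illustration (the numbers here are my own, not drawn from the paper): a joint distribution over $n$ binary variables $X_1, \dots, X_n$ has up to $2^n - 1$ free numerical parameters in general, whereas if the variables are mutually independent it factors as
\[
  P(X_1, \dots, X_n) = \prod_{i=1}^{n} P(X_i),
\]
leaving only $n$ parameters. For $n = 30$ this is the difference between roughly $10^9$ parameters and $30$; a probabilistic model is guaranteed to exist by the theorems, but a factorization of this kind, and hence tractability, is not.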