Philosophers and "pseudognosticians" (the artificial intelligentsia1) are coming more and more to recognize that they share common ground and that each can learn from the other. This has been generally recognized for many years as far as symbolic logic is concerned, but less so in relation to the foundations of probability. In this essay I hope to convince the pseudognostician that the philosophy of probability is relevant to his work. One aspect that I could have discussed would have been probabilistic causality (Good, 1961/62), in view of Hans Berliner's forthcoming paper "Inferring causality in tactical analysis", but my topic here will be mainly dynamic probability.

The close relationship between philosophy and pseudognostics is easily understood, for philosophers often try to express as clearly as they can how people make judgments. To parody Wittgenstein, what can be said at all can be said clearly, and it can be programmed.

A paradox might seem to arise. Formal systems, such as those used in mathematics, logic, and computer programming, can lead to deductions outside the system only when there is an input of assumptions. For example, no probability can be numerically inferred from the axioms of probability unless some probabilities are assumed without using the axioms: ex nihilo nihil fit.2 This leads to the main controversies in the foundations of statistics: the controversies of whether intuitive probability3 should be used in statistics and, if so, whether it should be logical probability (credibility) or subjective (personal) probability. Those of us who talk about the probabilities of hypotheses, or at least the relative probabilities of pairs of hypotheses (Good, 1950, 1975), are obliged to use intuitive probabilities. It is difficult or impossible to lay down precise rules for specifying the numerical values of these probabilities, so some of us emphasize the need for subjectivity, bridled by axioms.
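The point that ex nihilo nihil fit can be made concrete with a small illustrative sketch (mine, not the essay's): the axioms of probability only transform assumed numbers into other numbers; they produce none on their own. The particular values and the independence assumption below are arbitrary inputs chosen for illustration.

```python
def complement(p_a):
    """P(not A) = 1 - P(A): a deduction licensed by the axioms,
    but only once P(A) has been supplied from outside them."""
    return 1.0 - p_a

def union_independent(p_a, p_b):
    """P(A or B) = P(A) + P(B) - P(A)P(B), valid only under the
    further assumed judgment that A and B are independent."""
    return p_a + p_b - p_a * p_b

# The axioms alone yield no numbers; these inputs are intuitive
# (subjective) judgments, not consequences of the formal system.
p_a = 0.3
p_b = 0.5

print(complement(p_a))              # 0.7
print(union_independent(p_a, p_b))  # 0.65
```

Every numerical output traces back to the assumed inputs; the formal system merely propagates them, which is why intuitive probabilities cannot be dispensed with.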
At least one of us is convinced, and has repeatedly emphasized for the last thirty years, that a subjective probability can usually be judged only to lie in some interval of values, rather than having a
[1] I. Good. A Five-Year Plan for Automatic Chess, 2013.
[2] S. Poirier. Foundations of Mathematics, 2007.
[3] I. J. Good et al. Explicativity, corroboration, and the relative odds of hypotheses. Synthese, 1975.
[4] K. Vind. A foundation for statistics, 2003.
[5] I. J. Good et al. A Little Learning Can Be Dangerous. The British Journal for the Philosophy of Science, 1974.
[6] J. F. Crook et al. The Bayes/Non-Bayes Compromise and the Multinomial Distribution, 1974.
[7] I. Good et al. The Diagnostic Process with Special Reference to Errors. Methods of Information in Medicine, 1971.
[8] I. J. Good et al. Some statistical methods in machine intelligence research, 1970.
[9] I. Good. Corroboration, Explanation, Evolving Probability, Simplicity and a Sharpened Razor. The British Journal for the Philosophy of Science, 1968.
[10] I. Good. A Bayesian Significance Test for Multinomial Distributions, 1967.
[11] I. Good. On the Principle of Total Evidence, 1967.
[12] C. A. B. Smith et al. Consistency in Statistical Inference and Decision, 1961.
[13] I. Good. Weight of Evidence, Corroboration, Explanatory Power, Information and the Utility of Experiments, 1960.
[14] L. Stein et al. Probability and the Weighing of Evidence, 1950.
[15] G. Polya et al. Heuristic Reasoning and the Theory of Probability, 1941.
[16] B. O. Koopman. The Axioms and Algebra of Intuitive Probability, 1940.