Comments on An inquiry into computer understanding

vague statements could imply that a highly precise and mathematical theory must govern knowledge and behavior. Producing axiom systems has been a widespread intellectual pastime for 60 years, and so has been picking holes in them, but the final Bayesian theory is so transparent and elegant (if difficult to instantiate) that there can scarcely be a need for justification by minimalist assumptions and contrived arguments. Statisticians seem to prefer Savage’s postulates mentioned above, but see Shafer (1986) for a reconsideration.

A major sticking point is the postulate which Cheeseman calls “completeness.” What could it possibly mean that “Heckerman et al. showed that this is the main property violated by the Dempster-Shafer theory”? Not only is the term “main property” vague, but, more to the point, in my original papers 20 years ago (cf. Dempster (1968)) I made quite explicit that a major purpose of my theory, like Fisher’s fiducial theory before it, was to drop completeness. So there is nothing for Heckerman et al. to “show.”

The issue of whether, or more reasonably when, the Dempster-Shafer theory is a plausible weakening of Bayes is much too big to tackle here. However, since Cheeseman and other proponents of Bayes within AI advocate a combination of deterministic logic with probabilistic reasoning, it is worth remarking that the theory of belief functions not only combines them but in fact unifies them, which is very likely its greatest selling point. Meanwhile, there is plenty of work to go round to make Bayes work.
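The unification remarked on above can be made concrete with a small sketch of Dempster’s rule of combination. In the sketch below (the function and variable names are illustrative, not from the text), a Bayesian belief function puts all its mass on singletons, while a purely logical piece of evidence puts all its mass on one subset of the frame; combining the two by Dempster’s rule reproduces ordinary Bayesian conditioning, so probability and categorical logic appear as special cases of one calculus.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal
    elements are frozensets over a common frame of discernment."""
    raw = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass committed to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # renormalize by the unconflicted mass
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

# frame of discernment {a, b, c}
bayes = {frozenset("a"): 0.5,        # Bayesian mass function:
         frozenset("b"): 0.3,        # all focal elements singletons
         frozenset("c"): 0.2}
logic = {frozenset("ab"): 1.0}       # categorical evidence: answer is in {a, b}

posterior = combine(bayes, logic)
print(posterior)  # mass 0.625 on {a}, 0.375 on {b}: conditioning on {a, b}
```

The conflicting mass 0.2 (on {c}) is discarded and the remainder renormalized, exactly as Bayesian conditioning renormalizes after striking out the excluded outcome.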