THE LOGIC OF STATISTICAL INFERENCE

To review a book seven years after its publication is unusual. The distribution of elapsed times between publication and review is probably multimodal, with a peak at a relatively short time, and subsidiary peaks at times corresponding to jubilees, centenaries, and so forth. It is a measure of the importance of Hacking's work that, in spite of the fact that the foundations of statistical inference have for ten years past been an area of very active controversy, a discussion restricted to his major theses still seems appropriate and up-to-date. Re-reading the book one is again impressed with its easy-flowing style, full of felicitous phrases—such as 'cheerful concordat' to describe the current state of divided opinion on the foundations of set theory—but with careful attention to logical niceties. It will have been read by all who have been concerned with the foundations of statistical inference, and it is to be hoped that it will continue to be read by more and more, especially by mathematical statisticians who are all too prone to hare off into abstract mathematics without taking proper care to ensure that their mathematical model is relevant to the scientific or practical situation.

The simplest mathematical models for inferential processes are those which were first explicitly set forth by Neyman and Pearson.
The elements are (i) a sample space S of possible results of the experiment in question; for instance, if we are tossing a penny ten times, S consists of the 2^10 sequences like HHTHTTTHTH which could represent the results of the tosses, in the order in which they occurred; (ii) a parameter space Ω of possible values for an unknown parameter θ; for instance, Ω might consist of the points in the open unit interval {θ: 0 < θ < 1}, or the closed unit interval {θ: 0 ≤ θ ≤ 1}; (iii) a function p(x, θ) of two variables, x ranging over S and θ ranging over Ω, specifying the probability of getting the result x when the true value of the parameter is θ; for instance we may have