FREQUENTIST AND BAYESIAN STATISTICS: A CRITIQUE

There are two broad approaches to formal statistical inference, taken here as concerned with the development of methods for analysing noisy empirical data and, in particular, with attaching measures of uncertainty to conclusions. The object of this paper is to summarize what is involved.

The issue is this. We have data, represented collectively by y, taken to be the observed value of a vector random variable Y having a distribution determined by unknown parameters θ = (ψ, λ). Here ψ is a parameter of interest, often corresponding to a signal, whereas λ represents such features as aspects of the data-capture procedure, background noise and so on. In this formulation, probability is an (idealized) representation of the stability of long-run frequencies, whereas ψ aims to encapsulate important underlying physical parameters that are free from the accidents of the specific data under analysis. How should we estimate ψ, and how should we express our uncertainty about ψ?

In the following discussion we assume that the probability model correctly represents the underlying physics. This means that issues of model criticism and possible model reformulation, which arise in many other applications of statistical methods, can be disregarded.
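As a concrete (and entirely illustrative) instance of this setup, consider the familiar signal-plus-background counting experiment: the data y are Poisson counts, ψ is the signal rate of interest, and λ is a nuisance background rate constrained by a sideband measurement. The model, the parameter values and the sideband exposure factor τ below are assumptions for the sketch, not taken from the paper.

```python
# Illustrative sketch of the paper's setup: y is the observed value of Y,
# whose distribution depends on theta = (psi, lam); psi is the parameter
# of interest (signal rate), lam a nuisance parameter (background rate).
# All numerical values here are assumptions chosen for illustration.
import math
import random

random.seed(0)

PSI_TRUE = 5.0   # signal rate (parameter of interest; assumed)
LAM_TRUE = 3.0   # background rate (nuisance parameter; assumed)
TAU = 10.0       # relative exposure of a background-only sideband (assumed)

def poisson(mu):
    """Draw one Poisson variate by inversion (adequate for small mu)."""
    u = random.random()
    k, p = 0, math.exp(-mu)
    c = p
    while u > c:
        k += 1
        p *= mu / k
        c += p
    return k

# Observed data y = (n, m): n counts in the signal region,
# m counts in the background-only sideband.
n = poisson(PSI_TRUE + LAM_TRUE)
m = poisson(TAU * LAM_TRUE)

# Joint maximum-likelihood estimates for this model (when n - m/tau >= 0):
# the sideband pins down lambda, the signal region then gives psi.
lam_hat = m / TAU
psi_hat = max(n - lam_hat, 0.0)

print(f"n={n}, m={m}, lambda_hat={lam_hat:.2f}, psi_hat={psi_hat:.2f}")
```

The point of the sketch is only to make the abstract decomposition θ = (ψ, λ) concrete: uncertainty about the nuisance parameter λ propagates into the estimate of ψ, which is precisely the situation the frequentist and Bayesian approaches handle differently.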