A UNIFIED THEORY OF ESTIMATION

0. Introduction and summary. This paper extends and unifies some previous formulations and theories of estimation for one-parameter problems. The basic criterion used is admissibility of a point estimator, defined with reference to its full distribution rather than special loss functions such as squared error. Theoretical methods of characterizing admissible estimators are given, and practical computational methods for their use are illustrated. Point, confidence limit, and confidence interval estimation are included in a single theoretical formulation, and incorporated into estimators of an "omnibus" form called "confidence curves." The usefulness of the latter for some applications as well as theoretical purposes is illustrated. Fisher's maximum likelihood principle of estimation is generalized, given exact (non-asymptotic) justification, and unified with the theory of tests and confidence regions of Neyman and Pearson. Relations between exact and asymptotic results are discussed. Further developments, including multiparameter and nuisance parameter problems, problems of choice among admissible estimators, formal and informal criteria for optimality, and related problems in the foundations of statistical inference, will be presented subsequently.

1. A broad formulation of the problem of point estimation. We consider problems of estimation with reference to a specified experiment E, leaving aside here questions of experimental design, including those of choice of a sample size or a sequential sampling rule; some definite sampling rule, possibly sequential, is assumed specified as part of E. Let S = {x} denote the sample space of possible outcomes x of the experiment. Let f(x, θ) denote one of the elementary probability functions on S which are specified as possibly true. Let Ω = {θ} denote the specified parameter space.
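As a concrete illustration of this formulation (not an example from the paper), one may take E to be a fixed number n of independent Bernoulli trials. The sketch below, under that assumption, exhibits the sample space S, an elementary probability function f(x, θ), the parameter space Ω = (0, 1), and a point estimator; with counting measure as μ, the integral defining Prob{X ∈ A | θ} reduces to a sum.

```python
from math import comb

N = 10  # assumed fixed sample size, part of the specification of E

def f(x, theta, n=N):
    """Elementary probability function f(x, theta) on S = {0, 1, ..., n}:
    the binomial probability of x successes in n Bernoulli(theta) trials."""
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

def prob(A, theta, n=N):
    """Prob{X in A | theta}: with mu taken as counting measure on S,
    the integral of f(x, theta) over A reduces to a sum."""
    return sum(f(x, theta, n) for x in A)

def g(x, n=N):
    """A point estimator of gamma(theta) = theta; it takes values in
    [0, 1], the closure of the open parameter space Omega = (0, 1)."""
    return x / n
```

For instance, prob(range(N + 1), 0.3) is 1 (the whole sample space has probability one under any θ), and g(7) = 0.7 estimates θ from the outcome x = 7.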
For each θ in Ω and for each subset A of S, the probability that E yields an outcome x in A is given by Prob{X ∈ A | θ} = ∫_A f(x, θ) dμ(x), where μ is a specified σ-finite measure on S. (We assume tacitly here and below that consideration is appropriately restricted to measurable sets and functions only.) If γ = γ(θ) is any function defined on Ω (e.g., γ(θ) ≡ θ), with range Γ, a point estimator of γ is any measurable function g = g(x) taking values in Γ (or in Γ̄, its closure, if, for example, Γ is an open interval). The problem of