Bayesian Conditionalisation and the Principle of Minimum Information

The use of the principle of minimum information, or equivalently the principle of maximum entropy, has been advocated by a number of authors in recent years, both in statistical physics and, more generally, in statistical inference.¹ It has perhaps not been sufficiently appreciated by philosophers, however, that this principle, when properly understood, affords a rule of inductive inference of the widest generality.² The purpose of this paper is to draw attention to the generality of the principle. Thus the Bayesian rule of conditionalisation, as well as its extension by R. C. Jeffrey, will be exhibited as special cases. General conditions under which it yields a unique prescription will also be studied. Detailed treatment will be restricted to the finite-dimensional case, but an outline of the general case is given in the Appendix.

The underlying idea of maximum entropy inference is this. Suppose P to be a probability distribution assigning probabilities $p_1, \ldots, p_n$ to n mutually exclusive and jointly exhaustive events. Then the information-theoretic entropy of the distribution is defined by

$$S(P) = -\sum_{i=1}^{n} p_i \log p_i.$$
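The claim that conditionalisation falls out of the principle can be checked numerically. The sketch below, in Python, minimises the information of a posterior Q relative to a prior P, namely $I(Q; P) = \sum_i q_i \log(q_i / p_i)$ (entropy maximisation being the special case of a uniform prior), subject to constraints fixing the new probabilities of a partition, and compares the result with Jeffrey's rule; ordinary Bayesian conditionalisation is then the case in which one cell of the partition receives probability 1. The particular prior, partition, and constraint values are illustrative assumptions, not taken from the paper.

```python
# A minimal numerical sketch (illustrative, not the paper's own computation):
# minimise I(Q; P) = sum_i q_i log(q_i / p_i) subject to constraints fixing
# the posterior probabilities of a partition, and check that the minimiser
# agrees with Jeffrey conditionalisation.

import numpy as np
from scipy.optimize import minimize

p = np.array([0.1, 0.2, 0.3, 0.4])                # illustrative prior over four atoms
partition = [np.array([0, 1]), np.array([2, 3])]  # partition cells E1, E2
targets = [0.7, 0.3]                              # constrained posteriors Q(E1), Q(E2)

def information(q):
    """Information of q relative to the prior p (the Kullback-Leibler divergence)."""
    return float(np.sum(q * np.log(q / p)))

# One equality constraint per partition cell; since the targets sum to one,
# these already force Q to be normalised, so no separate sum constraint is needed.
cons = [
    {"type": "eq", "fun": lambda q, idx=idx, t=t: q[idx].sum() - t}
    for idx, t in zip(partition, targets)
]

res = minimize(information, x0=p, bounds=[(1e-9, 1.0)] * len(p), constraints=cons)

# Jeffrey's rule in closed form: within each cell E_j, probabilities are
# rescaled proportionally, q_i = Q(E_j) * p_i / P(E_j).
jeffrey = p.copy()
for idx, t in zip(partition, targets):
    jeffrey[idx] = t * p[idx] / p[idx].sum()

print("minimum-information posterior:", np.round(res.x, 6))
print("Jeffrey-rule posterior:       ", np.round(jeffrey, 6))
```

The two posteriors coincide: within each cell the prior odds are preserved, which is exactly Jeffrey's rule, and setting one target to 1 recovers Bayesian conditionalisation on that cell.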