Generating Degrees of Belief from Statistical Information: An Overview

Consider an agent (or expert system) with a knowledge base KB that includes statistical information (such as “90% of patients with jaundice have hepatitis”), first-order information (“all patients with hepatitis have jaundice”), and default information (“patients with jaundice typically have a fever”). A doctor with such a KB may want to assign a degree of belief to an assertion ϕ such as “Eric has hepatitis”. Since the actions the doctor takes may depend crucially on this degree of belief, we would like to specify a mechanism by which she can use her knowledge base to assign a degree of belief to ϕ in a principled manner. We have been investigating a number of techniques for doing so; in this paper we give an overview of one of them. The method, which we call the random worlds method, is a natural one: For any given domain size N, we consider the fraction of models satisfying ϕ among models of size N satisfying KB. If we do not know the domain size N, but know that it is large, we can approximate the degree of belief in ϕ given KB by taking the limit of this fraction as N goes to infinity. As we show, this approach has many desirable features. In particular, in many cases that arise in practice, the answers we get using this method provably match heuristic assumptions made in many standard AI systems.
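As a toy illustration of the counting idea behind the random-worlds method (this sketch is ours, not the paper's formalism: the predicate names, the Boolean encoding of worlds, and the simplified KB are all assumptions), consider a knowledge base containing only the first-order fact "all patients with hepatitis have jaundice" and the observation "Eric has jaundice". For each domain size N we can enumerate every world, keep those satisfying the KB, and measure the fraction that also satisfy ϕ = "Eric has hepatitis":

```python
from itertools import product

def degree_of_belief(n):
    """Fraction of size-n worlds satisfying KB that also satisfy phi.

    KB:  forall x. hepatitis(x) -> jaundice(x), and jaundice(Eric).
    phi: hepatitis(Eric).
    A world assigns each domain element a pair (hep, jau) of truth values;
    Eric is element 0.
    """
    pairs = [(False, False), (False, True), (True, False), (True, True)]
    satisfying_kb = 0
    satisfying_phi = 0
    for world in product(pairs, repeat=n):
        # KB holds iff the implication holds at every element and Eric has jaundice
        if all((not hep) or jau for hep, jau in world) and world[0][1]:
            satisfying_kb += 1
            if world[0][0]:  # phi: Eric has hepatitis
                satisfying_phi += 1
    return satisfying_phi / satisfying_kb

for n in (1, 2, 3, 4):
    print(n, degree_of_belief(n))  # the fraction is 0.5 for every n
```

Here the fraction is already constant in N (for Eric, the KB leaves exactly two admissible hepatitis/jaundice combinations, one of which satisfies ϕ), so the limit as N goes to infinity is 1/2. With genuinely statistical assertions such as "90% of patients with jaundice have hepatitis", brute-force enumeration becomes infeasible and the limiting behavior must be analyzed asymptotically, which is the subject of the method proper.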
