Special section on dependence issues in knowledge-based systems

Approximate-reasoning methods rely on a host of evidence- and knowledge-manipulation functions to produce estimates of the truth of various hypotheses of interest. Among these functions, two are particularly important: information propagation and information combination [1]. The special section on Dependence Issues in Knowledge-Based Systems in this issue of the International Journal of Approximate Reasoning focuses on a most important aspect of information combination, namely the role of dependence between informational bodies in their aggregation into a fused result. Intuitively, the combination of information derived from identical sources should not result in an evidential body that is more informative than either of them, while, on the other hand, information derived from independent sources should reinforce areas of common agreement while eliminating conclusions that are inconsistent with any of the sources being merged. The modeling of informational dependence, and the selection of appropriate combination algorithms on the basis of knowledge about such dependence, are therefore of paramount importance in the derivation of credible fused results. Depending on the approximate-reasoning methodology being employed, it is sometimes possible to represent explicitly, through various modeling structures, the nature of the relations between multiple information sources. More often than not, however, such knowledge is available only in a partial, imprecise, and uncertain manner. The objective of this special section is to present novel results relevant to the fusion of dependent information in such situations.
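The intuition above can be made concrete with Dempster's classical rule for combining two belief-function mass assignments, which presumes independent sources. The minimal sketch below (the frame, mass values, and function names are illustrative, not drawn from any of the papers in this section) shows why dependence matters: combining a source with an exact copy of itself sharpens the fused belief, violating the requirement that identical sources add no information.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    by Dempster's rule, which assumes the sources are independent.
    Returns the normalized fused mass function and the conflict mass K."""
    combined = {}
    conflict = 0.0  # total mass falling on empty intersections
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        c = a & b
        if c:
            combined[c] = combined.get(c, 0.0) + x * y
        else:
            conflict += x * y
    # Normalize by 1 - K (K is the conflict mass used as a
    # classical, normalization-based measure of evidential conflict).
    return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

# Frame {p, q}; a source assigning 0.6 to {p} and 0.4 to ignorance.
m = {frozenset({"p"}): 0.6, frozenset({"p", "q"}): 0.4}
fused, k = dempster_combine(m, m)
# Fusing the source with itself yields fused[{p}] = 0.84 > 0.6:
# the result is strictly more committed, even though no new
# information was supplied -- the rule is not idempotent.
```

Dempster's rule is thus appropriate only when the independence assumption holds; rules designed for dependent sources, such as those discussed in this section, must trade this reinforcement behavior for idempotence or some cautious weakening of it.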
Although the notions of dependence and independence are very important, not only in approximate-reasoning approaches—including all forms of probabilistic and possibilistic methods—but also throughout system modeling and analysis, the papers published in this special section address the problem of combining partially dependent evidential bodies in the context of the theory of belief functions [2,3] and its generalizations [4,5]. The theory of belief functions is especially well suited to the representation of imprecise knowledge, both for its ability to model ignorance and for its formal relations to other approximate-reasoning formalisms. This methodology, however, still lacks general mechanisms to represent dependencies between sets of descriptive variables and to merge dependent evidence on the basis of such relations. Furthermore, a number of open questions remain about the important notion of evidential independence. The works included in this issue are representative of various approaches to the formal characterization of the notion of independence, the representation and combination of dependent evidence, and the learning of combination formulas applicable to specific situations. Cattaneo derives two combination rules for the fusion of belief functions obtained from not necessarily independent sources. These formulas approximate the plausibility and commonality functions of the fused evidence so as to meet a number of requirements that may reasonably be demanded of any such formulation. In the context of these derivations, the author proposes a new measure of evidential conflict that has advantages over the one derived from the normalization factor associated with Dempster’s combination formula. In addition, these rules are related to the minimum rule of possibility theory, Dempster’s combination formula, and Denœux’s cautious rule [6].
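For contrast with Dempster's rule, the minimum rule of possibility theory mentioned above combines two possibility distributions by a pointwise minimum and is therefore idempotent, which makes it a safe (if conservative) choice when the sources may be fully dependent. A minimal sketch, with an illustrative two-outcome distribution of our own choosing:

```python
def min_combine(pi1, pi2):
    """Minimum rule of possibility theory: pointwise minimum of two
    possibility distributions (dicts mapping outcome -> degree in [0, 1])."""
    return {x: min(pi1[x], pi2[x]) for x in pi1}

pi = {"p": 1.0, "q": 0.4}
# Idempotence: merging a source with itself changes nothing,
# so duplicated (fully dependent) evidence is not double-counted.
assert min_combine(pi, pi) == pi
```

Rules such as Denœux's cautious rule [6] and the formulas derived by Cattaneo can be seen as attempts to occupy the ground between these two extremes: reinforcing genuinely independent evidence without over-counting shared information.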
As Cattaneo points out, his work should be regarded as a step towards the development of better approximations for the combined evidence. Jiroušek and Vejnarová approach the evidence-combination problem from the perspective of the representation of multidimensional belief distributions as a composition of marginal or conditional distributions of lower dimensionality. These compositional models, introduced earlier as an alternative to probabilistic graphical Markov models, rely on a new notion of conditional independence to iteratively apply a composition operator that factorizes complex distributions into smaller components. Originally developed in the context of classical probability theory, these models were later extended to represent distributions in possibility theory. The present work introduces a new composition operator that extends the approach into the realm of the theory of belief functions, permitting the generation of compositional representations that reveal relations of independence between variables. Monney, Chan, and Romberg present a reasoning model, based on a sound combination of classical logic and probability theory, to treat automatic classification problems in which the discriminating features are dependent and not fully reliable.