Belief Functions Based Parameter and Structure Learning of Bayesian Networks in the Presence of Missing Data

Existing methods for learning the parameters and structure of Bayesian networks (BNs) from a database assume that the database is complete; if values are missing, they are assumed to be missing at random. This paper incorporates concepts from the Dempster-Shafer theory of belief functions to learn both the parameters and the structure of BNs. Instead of filling in missing values with estimates, as is done in conventional techniques, the proposed approach models a missing value as representing the system modeler's ignorance, or lack of belief, about the actual state of the corresponding variable. This representation modifies the existing algorithms for parameter and structure learning of BNs. It also allows a system modeler to add new findings in the form of support functions, as used in belief functions theory, thus providing a richer way to enter evidence into BNs.
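
As a minimal illustration in standard Dempster-Shafer notation (the symbols Θ_X, m, and s below are generic and not taken from the paper itself): a missing observation of a discrete variable X with frame of discernment Θ_X = {x_1, ..., x_k} can be encoded as the vacuous basic probability assignment, while a partially informative finding can be entered as a simple support function:

    m(Θ_X) = 1                                        (missing value: total ignorance)
    m({x_i}) = s,   m(Θ_X) = 1 − s,   0 ≤ s ≤ 1       (simple support function for state x_i)

By contrast, a conventional imputation scheme commits the full unit of mass to a single estimated state, which overstates what the modeler actually knows about the missing value.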
