Most real-world data is stored in relational form. In contrast, most statistical learning methods, e.g., Bayesian network learning, work only with “flat” data representations, forcing us to convert our data into a form that loses much of the relational structure. The recently introduced framework of probabilistic relational models (PRMs) allows us to represent much richer dependency structures involving multiple entities and the relations between them; in a PRM, the properties of an entity can depend probabilistically on properties of related entities. Friedman et al. showed how to learn PRMs that model attribute uncertainty in relational data, and presented techniques for learning both the parameters and the probabilistic dependency structure for the attributes in a relational model. In this work, we propose methods for handling structural uncertainty in PRMs. Structural uncertainty is uncertainty over which entities are related in our domain. We propose two mechanisms for modeling structural uncertainty: reference uncertainty and existence uncertainty. We describe the appropriate conditions for using each model and present learning algorithms for each. We conclude with preliminary experimental results comparing and contrasting the use of these mechanisms for learning PRMs in domains with structural uncertainty.
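To make the distinction concrete, here is a minimal sketch (not the authors' formulation or implementation) contrasting the two mechanisms on a toy citation domain; the entity names, attributes, topic-based partition, and probabilities are all invented for illustration.

```python
# Illustrative sketch only (not the paper's implementation): contrasts the two
# structural-uncertainty mechanisms on a toy citation domain.  The names,
# attributes, and probabilities below are assumptions made for illustration.
import random

random.seed(0)

# Toy relational skeleton: papers with a single "topic" attribute.
papers = {
    "p1": {"topic": "AI"},
    "p2": {"topic": "AI"},
    "p3": {"topic": "Theory"},
}

# Reference uncertainty: the citation objects are known to exist, but which
# paper each citation refers to is uncertain.  The referent is chosen via a
# distribution over a partition of the candidate papers (here, by topic),
# then uniformly within the selected partition.
def sample_cited_paper(citing_topic):
    topic_dist = ({"AI": 0.8, "Theory": 0.2} if citing_topic == "AI"
                  else {"AI": 0.3, "Theory": 0.7})            # made-up CPD
    topic = random.choices(list(topic_dist),
                           weights=list(topic_dist.values()))[0]
    candidates = [pid for pid, attrs in papers.items()
                  if attrs["topic"] == topic]
    return random.choice(candidates)

# Existence uncertainty: every potential (citing, cited) pair carries a
# binary "exists" variable whose probability depends on attributes of the
# two endpoint entities.
def link_exists(citing_topic, cited_topic):
    p = 0.6 if citing_topic == cited_topic else 0.1           # made-up CPD
    return random.random() < p

if __name__ == "__main__":
    print("reference uncertainty: p1 cites", sample_cited_paper("AI"))
    links = [(a, b) for a in papers for b in papers
             if a != b and link_exists(papers[a]["topic"], papers[b]["topic"])]
    print("existence uncertainty: sampled links:", links)
```

The intended contrast: reference uncertainty keeps the set of link objects fixed and places a distribution over their endpoints, whereas existence uncertainty places a distribution over which potential links are present at all.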
[1] David Heckerman et al., "A Tutorial on Learning with Bayesian Networks," Learning in Graphical Models, 1998.
[2] Avi Pfeffer et al., "Probabilistic Frame-Based Systems," AAAI/IAAI, 1998.
[3] Peter Haddawy et al., "Answering Queries from Context-Sensitive Probabilistic Knowledge Bases," 1996.
[4] David Poole et al., "Probabilistic Horn Abduction and Bayesian Networks," Artif. Intell., 1993.
[5] Douglas C. Schmidt et al., "Learning probabilistic relational models," 2001.
[6] Lise Getoor et al., "Learning Probabilistic Relational Models," IJCAI, 1999.