Within the realm of statistical relational knowledge representation formalisms, Markov logic is perhaps one of the most flexible and general languages, for it generalises both first-order logic (for finite domains) and probabilistic graphical models. Knowledge engineering with Markov logic is, however, not a straightforward task. In particular, modelling approaches that are too firmly rooted in the principles of logic often tend to produce unexpected results in practice. In this paper, I collect a number of issues that are relevant to knowledge engineering practice: I describe the fundamental semantics of Markov logic networks and explain how simple probabilistic properties can be represented. Furthermore, I discuss fallacious modelling assumptions and summarise conditions under which generalisation across domains may fail. As a collection of fundamental insights, the paper is primarily directed at knowledge engineers who are new to Markov logic.
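The "fundamental semantics" the abstract refers to is the log-linear distribution an MLN defines over possible worlds: P(x) ∝ exp(Σᵢ wᵢ nᵢ(x)), where nᵢ(x) counts the satisfied groundings of formula i in world x. The following is a minimal sketch of that semantics for a hypothetical one-formula MLN (the formula, weight, and atom names are illustrative, not taken from the paper):

```python
from itertools import product
from math import exp

# Hypothetical toy MLN over two ground atoms, Smokes(A) and Cancer(A),
# with one weighted formula "Smokes(A) => Cancer(A)" of weight w.
# A possible world is a truth assignment to all ground atoms;
# P(world) is proportional to exp(w * n(world)), where n(world)
# counts the satisfied groundings of the formula in that world.

w = 1.5  # illustrative weight

def n_satisfied(smokes, cancer):
    # The implication Smokes(A) => Cancer(A) is violated only when
    # Smokes is true and Cancer is false.
    return 1 if (not smokes) or cancer else 0

worlds = list(product([False, True], repeat=2))
unnorm = {wld: exp(w * n_satisfied(*wld)) for wld in worlds}
Z = sum(unnorm.values())            # partition function
probs = {wld: u / Z for wld, u in unnorm.items()}

for (smokes, cancer), p in probs.items():
    print(f"Smokes={smokes!s:5} Cancer={cancer!s:5} P={p:.3f}")
```

Note the soft-constraint behaviour central to Markov logic: the world violating the implication is exp(w) times less probable than each satisfying world, but it retains nonzero probability, unlike under a hard first-order constraint.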