Algorithms for exact and approximate inference in stochastic logic programs (SLPs) are presented, based, respectively, on variable elimination and importance sampling. We then show how SLPs can be used to represent prior distributions for machine learning, using (i) logic programs and (ii) Bayes net structures as examples. Drawing on existing work in statistics, we apply the Metropolis-Hastings algorithm to construct a Markov chain which samples from the posterior distribution. A Prolog implementation of this is described. We also discuss the possibility of constructing explicit representations of the posterior.
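The paper's own sampler is implemented in Prolog and operates over SLP derivations; that implementation is not reproduced here. As a language-agnostic illustration of the underlying technique, the following is a minimal sketch of a Metropolis-Hastings sampler with a symmetric random-walk proposal, targeting a generic unnormalised posterior. All names (`log_target`, `proposal`, the standard-normal example) are hypothetical and chosen for the sketch, not drawn from the paper.

```python
import math
import random

def metropolis_hastings(log_target, proposal, x0, n_steps, seed=0):
    """Generic Metropolis-Hastings with a symmetric proposal.

    log_target: unnormalised log-density of the target (posterior).
    proposal:   draws a candidate state given the current state.
    Returns the full chain of visited states.
    """
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_steps):
        candidate = proposal(x, rng)
        # With a symmetric proposal, the acceptance ratio reduces
        # to the ratio of target densities.
        log_alpha = log_target(candidate) - log_target(x)
        if math.log(rng.random()) < log_alpha:
            x = candidate
        chain.append(x)
    return chain

# Toy example: sample from a standard normal "posterior".
samples = metropolis_hastings(
    log_target=lambda x: -0.5 * x * x,
    proposal=lambda x, rng: x + rng.gauss(0.0, 1.0),
    x0=0.0,
    n_steps=5000,
)
```

In the SLP setting of the paper, the chain state would instead be a structured object (a logic program or a Bayes net structure), and the proposal would make local modifications to it; the acceptance step remains the same.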