Highly expressive probabilistic modeling languages can describe a wide variety of models. Some of these models are quite complex, so approximate inference algorithms are needed. Importance sampling is one approach to approximate inference, but it is hard to apply in expressive languages because of the many deterministic relationships between variables: naively drawn samples that violate a deterministic constraint implied by the evidence contribute nothing. This paper presents an importance sampling algorithm for the IBAL language based on the principle of using the structure of a model to infer as much as possible about a decision before committing to it. Using a musical example, the paper demonstrates how easily interesting new models can be encoded in IBAL. Results show that the importance sampling algorithm makes useful inferences and is far superior to rejection sampling. The musical example serves as a proof of concept that the algorithm is capable of handling real applications.
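To make the contrast concrete, here is a minimal sketch, not the paper's IBAL algorithm, of why evidence-weighted importance sampling (likelihood weighting) outperforms rejection sampling on a toy two-variable model. The model, probabilities, and function names are all hypothetical illustrations: X ~ Bernoulli(0.3), Y | X ~ Bernoulli(0.9 if X else 0.1), with evidence Y = 1, estimating P(X = 1 | Y = 1).

```python
import random

def p_y_given_x(x):
    """Likelihood of the evidence Y=1 given X (toy model, hypothetical numbers)."""
    return 0.9 if x else 0.1

def likelihood_weighting(n, seed=0):
    """Importance sampling: sample X from its prior, weight by the evidence likelihood.
    Every sample contributes; none are discarded."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = rng.random() < 0.3       # sample X ~ Bernoulli(0.3)
        w = p_y_given_x(x)           # weight by P(Y=1 | X=x)
        num += w * x
        den += w
    return num / den

def rejection_sampling(n, seed=0):
    """Rejection sampling: sample both variables, discard samples inconsistent
    with the evidence. Wasteful when the evidence is improbable or deterministic."""
    rng = random.Random(seed)
    kept = hits = 0
    for _ in range(n):
        x = rng.random() < 0.3
        y = rng.random() < p_y_given_x(x)
        if y:                        # keep only samples with Y=1
            kept += 1
            hits += x
    return hits / kept

# Exact posterior: 0.3*0.9 / (0.3*0.9 + 0.7*0.1) = 0.27/0.34 ≈ 0.794.
```

In this sketch both estimators converge to the same posterior, but rejection sampling throws away roughly two thirds of its samples; as evidence probabilities approach zero (the deterministic relationships the abstract mentions), its acceptance rate collapses while likelihood weighting keeps using every sample.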