Anytime Inference in Probabilistic Logic Programs with Tp-Compilation

Existing techniques for inference in probabilistic logic programs are sequential: they first compute the relevant propositional formula for the query of interest, then compile it into a tractable target representation, and finally perform weighted model counting on the resulting representation. We propose TP-compilation, a new inference technique based on forward reasoning. TP-compilation proceeds incrementally in that it interleaves the knowledge compilation step for weighted model counting with forward reasoning on the logic program. This leads to a novel anytime algorithm that provides hard bounds on the inferred probabilities. Furthermore, an empirical evaluation shows that TP-compilation effectively handles larger instances of complex real-world problems than current sequential approaches, both for exact and for anytime approximate inference.
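The idea of interleaving forward reasoning with compilation can be illustrated with a minimal sketch. The toy graph program, atom names, and data-structure choices below are illustrative assumptions, not the paper's implementation: each derived atom carries a set of explanations (a DNF over probabilistic facts), the TP operator is applied until a fixpoint, and the weighted model count of an atom's formula after any iteration is a sound lower bound on its probability (here computed by naive world enumeration rather than a compiled representation).

```python
from itertools import product

# Hypothetical toy ProbLog-style program (names are illustrative):
# 0.4::edge(a,b).  0.5::edge(b,c).  0.3::edge(a,c).
# path(X,Y) :- edge(X,Y).
# path(X,Y) :- edge(X,Z), path(Z,Y).
facts = {"e_ab": 0.4, "e_bc": 0.5, "e_ac": 0.3}
rules = [  # (head, body); body atoms are facts or derived atoms
    ("p_ab", ["e_ab"]), ("p_bc", ["e_bc"]), ("p_ac", ["e_ac"]),
    ("p_ac", ["e_ab", "p_bc"]),
]

def tp_step(lam):
    """One forward-reasoning step: extend each derived atom's
    set of explanations (DNF clauses over probabilistic facts)."""
    new = {h: set(cs) for h, cs in lam.items()}
    for head, body in rules:
        # a fact explains itself; a derived atom contributes its
        # explanations from the previous iteration
        options = [lam.get(b, {frozenset([b])} if b in facts else set())
                   for b in body]
        for combo in product(*options):
            new.setdefault(head, set()).add(frozenset().union(*combo))
    return new

def wmc(clauses):
    """Exact weighted model count of a small DNF by world enumeration."""
    names = sorted(facts)
    total = 0.0
    for bits in product([True, False], repeat=len(names)):
        world = dict(zip(names, bits))
        if any(all(world[f] for f in c) for c in clauses):
            w = 1.0
            for f, b in world.items():
                w *= facts[f] if b else 1.0 - facts[f]
            total += w
    return total

# Iterate to a fixpoint; each iteration's WMC is a sound lower bound,
# which is what makes the scheme anytime.
lam = {}
while True:
    nxt = tp_step(lam)
    if nxt == lam:
        break
    lam = nxt

print(round(wmc(lam["p_ac"]), 4))  # P(path(a,c)) → 0.44
```

In the actual algorithm the explanation sets would be maintained as compiled circuits (e.g. SDDs) so that the model count is available cheaply after every TP step; the enumeration above stands in only to keep the sketch self-contained.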
