New Rules for Domain Independent Lifted MAP Inference

Lifted inference algorithms for probabilistic first-order logic frameworks such as Markov logic networks (MLNs) have received significant attention in recent years. These algorithms use so-called lifting rules to identify symmetries in the first-order representation and reduce the inference problem over a large probabilistic model to an inference problem over a much smaller model. In this paper, we present two new lifting rules, which enable fast MAP inference in a large class of MLNs. Our first rule uses the concept of a single-occurrence equivalence class of logical variables, which we define in the paper. The rule states that the MAP assignment over an MLN can be recovered from a much smaller MLN, in which each logical variable in each single-occurrence equivalence class is replaced by a constant (i.e., an object in the domain of the variable). Our second rule states that we can safely remove a subset of formulas from the MLN if all equivalence classes of variables in the remaining MLN are single occurrence and all formulas in the subset are tautologies (i.e., evaluate to true) at extremes (i.e., assignments in which all groundings of a predicate have the same truth value). We prove that our two new rules are sound and demonstrate via a detailed experimental evaluation that our approach is superior to state-of-the-art approaches in terms of scalability and MAP solution quality.
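The intuition behind the first rule can be illustrated with a minimal sketch. The example below is an assumption-laden toy (the predicate `S`, the weights, and the helper names are illustrative, not the paper's notation): an MLN with two weighted formulas over one unary predicate and a single-occurrence variable `x` grounds into `n` symmetric, independent copies, so the MAP assignment can be recovered from a reduced MLN with a single constant.

```python
from itertools import product

# Toy MLN with a single-occurrence logical variable x:
#   1.5 :  S(x)     (weight favouring S true)
#   0.8 : !S(x)     (weight favouring S false)
# Grounding over a domain of size n yields n symmetric, independent
# copies, so the MAP assignment is uniform across objects and can be
# recovered from the reduced MLN with domain {c} (one constant).

def map_ground(weights, n):
    """Brute-force MAP over the fully ground MLN with domain size n."""
    best, best_score = None, float("-inf")
    for assign in product([False, True], repeat=n):
        # Each ground atom S(o) contributes one of the two weights.
        score = sum(weights[0] if v else weights[1] for v in assign)
        if score > best_score:
            best, best_score = assign, score
    return best, best_score

def map_reduced(weights, n):
    """MAP over the reduced MLN: one constant, weights scaled by n."""
    score_true, score_false = n * weights[0], n * weights[1]
    v = score_true >= score_false
    return v, max(score_true, score_false)

weights = (1.5, 0.8)  # (weight of S(x), weight of !S(x))
n = 4
ground_assign, ground_score = map_ground(weights, n)
reduced_v, reduced_score = map_reduced(weights, n)

# The reduced model recovers the ground MAP assignment and its score.
assert all(v == reduced_v for v in ground_assign)
assert ground_score == reduced_score
```

The brute-force search is exponential in the domain size, while the reduced problem is constant-size; this gap is exactly what the lifting rule exploits, and the paper's contribution is characterizing when the reduction is sound for general MLNs rather than this symmetric toy case.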