Initial Empirical Evaluation of Anytime Lifted Belief Propagation

Lifted first-order probabilistic inference, which manipulates first-order representations of graphical models directly, has been receiving increasing attention. Most lifted inference methods to date must process the entire given model before they can provide any information on a query’s answer, even when that answer is largely determined by a relatively small, local portion of the model. Anytime Lifted Belief Propagation (ALBP) performs Lifted Belief Propagation but, instead of first building a supernode network from the entire model, processes the model incrementally on an as-needed basis, maintaining guaranteed bounds on the query’s answer at all times. This allows a user either to detect that the answer has already been determined, before the entire model has been processed, or to stop as soon as the bounds are narrow enough for the application at hand. Moreover, the bounds can be made to converge to the exact solution once inference has processed the entire model. This paper presents preliminary results from an implementation of ALBP, illustrating how the bounds can sometimes be narrowed well before the exact answer would be computed.
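To make the anytime-bounds idea concrete, the following toy sketch (not the paper's ALBP implementation, and without lifting) bounds a query marginal in a binary-chain Markov random field. The model is processed incrementally outward from the query; the message arriving from the still-unexplored remainder of the chain is replaced by its two extremes (all probability mass on state 0, or all on state 1), which on a chain with the assumed attractive coupling yields valid bounds that narrow as more of the model is processed. The potentials `pairwise` and `unary` are illustrative assumptions, not values from the paper.

```python
def propagate(msg, pairwise, unary, depth):
    """Pass `msg` (the message from beyond the processing horizon)
    through `depth` processed chain nodes toward the query, then
    combine it with the query's own unary potential.  Returns the
    resulting (normalized) query marginal as [P(X0=0), P(X0=1)]."""
    for _ in range(depth):
        new = [0.0, 0.0]
        for x in (0, 1):
            for y in (0, 1):
                new[x] += pairwise[x][y] * unary[y] * msg[y]
        s = sum(new)
        msg = [v / s for v in new]
    q = [unary[x] * msg[x] for x in (0, 1)]
    s = sum(q)
    return [v / s for v in q]

# Assumed toy potentials: attractive coupling, uninformative unaries.
pairwise = [[2.0, 1.0], [1.0, 2.0]]
unary = [1.0, 1.0]

if __name__ == "__main__":
    for depth in range(1, 6):
        # Extreme boundary messages give lower/upper bounds on P(X0=0).
        a = propagate([1.0, 0.0], pairwise, unary, depth)[0]
        b = propagate([0.0, 1.0], pairwise, unary, depth)[0]
        lo, hi = min(a, b), max(a, b)
        print(f"depth {depth}: P(X0=0) in [{lo:.4f}, {hi:.4f}]")
```

Each additional processed node tightens the interval around the true marginal (here 0.5, by symmetry), so inference can stop as soon as the interval is narrow enough, mirroring the anytime behavior described above.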