On the fixed points of the max-product algorithm

Graphical models, such as Bayesian networks and Markov random fields, represent statistical dependencies among variables by a graph. The max-product "belief propagation" algorithm is a local message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point is guaranteed to yield the maximum a posteriori (MAP) values of the unobserved variables given the observed ones. Here we prove a result on the fixed points of max-product on a graph of arbitrary topology and with arbitrary probability distributions (discrete- or continuous-valued nodes). We show that the assignment based on the fixed point is a "neighborhood maximum" of the posterior probability: the posterior probability of the max-product assignment is guaranteed to be greater than that of all other assignments in a particular large region around that assignment. The region includes all assignments that differ from the max-product assignment in any subset of nodes that form no more than a single loop in the graph. In some graphs this neighborhood is exponentially large. We illustrate the analysis with examples.
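
To make the message-passing procedure discussed above concrete, here is a minimal sketch of max-product belief propagation on a toy pairwise Markov random field whose four nodes form a single loop. The graph, state space, random potentials, synchronous update schedule, and iteration count are illustrative assumptions and are not taken from the paper; the sketch only shows the form of the updates whose fixed points the result concerns.

```python
# Minimal sketch of max-product belief propagation on a pairwise MRF.
# Assumptions (not from the paper): a 4-node single-loop graph, binary
# variables, random positive potentials, synchronous ("flooding") updates.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # single loop 0-1-2-3-0
K = 2                                       # states per variable

rng = np.random.default_rng(0)
phi = {i: rng.uniform(0.5, 1.5, size=K) for i in range(4)}        # node potentials
psi = {e: rng.uniform(0.5, 1.5, size=(K, K)) for e in edges}      # edge potentials

def neighbors(i):
    return [j for (a, b) in edges for j in ((b,) if a == i else (a,) if b == i else ())]

# m[(i, j)] is the message from node i to node j, a vector over x_j.
m = {(i, j): np.ones(K) for (a, b) in edges for (i, j) in ((a, b), (b, a))}

for _ in range(50):                         # iterate toward a fixed point
    new_m = {}
    for (i, j) in m:
        # Product of node potential and incoming messages, excluding j's.
        incoming = phi[i].copy()
        for k in neighbors(i):
            if k != j:
                incoming *= m[(k, i)]
        pair = psi[(i, j)] if (i, j) in psi else psi[(j, i)].T
        msg = np.max(incoming[:, None] * pair, axis=0)   # maximize over x_i
        new_m[(i, j)] = msg / msg.sum()                  # normalize for stability
    m = new_m

# Max-product beliefs and the fixed-point assignment (argmax at each node).
belief = {i: phi[i] * np.prod([m[(k, i)] for k in neighbors(i)], axis=0) for i in range(4)}
assignment = {i: int(np.argmax(belief[i])) for i in range(4)}
print(assignment)
```

On a tree these updates converge to a unique fixed point whose argmax gives the MAP assignment; on loopy graphs such as this one, the paper's result characterizes any fixed point reached as a neighborhood maximum of the posterior.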
