Filling in scenes by propagating probabilities through layers and into appearance models

Inferring the identities and positions of multiple occluding objects in a noisy image is a difficult problem, even when the shapes and appearances of the allowable objects are known. Methods that detect and analyze shape features, occlusion boundaries, and optical flow break down when the image is noisy. When the boundaries and appearances of the allowable objects are known, a brute-force method can perform MAP inference, but if there are K possible objects (including translations, etc.) in up to L layers, the scene has K^L possible configurations, so exact inference is intractable for large numbers of objects and more than a few layers. We construct a Bayesian network that describes the occlusion process and use iterative probability propagation to approximately recover the identities and positions of the objects in the scene in time that is linear in K and L. Although iterative probability propagation is an approximate inference technique, it was recently used to set the world record in error-correcting decoding. Experiments show that when one explanation of the scene is most probable, the algorithm finds it. On a small problem, we show that as the number of iterations increases, iterative probability propagation outperforms a greedy technique and approaches the exact MAP solution. Surprisingly, we also find that when the order of occlusion is ambiguous, the output of the algorithm may oscillate between plausible interpretations of the scene.
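To make the K^L scaling concrete, the following is a minimal sketch of a toy layered-occlusion model, not the paper's actual network: a 1-D "image" is composited from L layers, each holding one of K candidate objects (index 0 meaning an empty, transparent layer), observed under Gaussian noise. It contrasts exact MAP inference by enumerating all K^L configurations against a greedy layer-by-layer update, which stands in here for the abstract's greedy baseline; the paper's iterative probability propagation runs message passing on the factorized occlusion network, which this sketch does not reproduce. All names (`render`, `log_lik`, the mask/appearance construction) are illustrative assumptions.

```python
# Toy sketch of layered occlusion with K objects per layer and L layers.
# Assumed model, for illustration only; not the paper's exact formulation.
import itertools
import numpy as np

rng = np.random.default_rng(0)

W = 8                      # width of a 1-D "image", kept tiny for enumeration
K, L = 4, 3                # K candidate objects per layer, L layers
SIGMA = 0.3                # observation noise standard deviation

# Each object is a (mask, appearance) pair; object 0 is fully transparent.
masks = np.zeros((K, W), dtype=bool)
appear = np.zeros((K, W))
for k in range(1, K):
    lo = rng.integers(0, W - 2)
    masks[k, lo:lo + 3] = True     # a 3-pixel-wide opaque strip
    appear[k] = k / K              # flat grey level per object

def render(config):
    """Composite layers front (index 0) to back; front-most opaque pixel wins."""
    img = np.zeros(W)
    covered = np.zeros(W, dtype=bool)
    for k in config:
        take = masks[k] & ~covered
        img[take] = appear[k][take]
        covered |= masks[k]
    return img

def log_lik(img, obs):
    """Gaussian log-likelihood of the observation given a rendered scene."""
    return -0.5 * np.sum((img - obs) ** 2) / SIGMA ** 2

# Generate a noisy observation from a random true scene.
truth = tuple(rng.integers(0, K, size=L))
obs = render(truth) + rng.normal(0, SIGMA, W)

# Exact MAP: enumerate all K**L configurations (intractable for large K, L).
best = max(itertools.product(range(K), repeat=L),
           key=lambda c: log_lik(render(c), obs))

# Greedy baseline (the abstract's comparison point): re-optimize one layer at
# a time with the others held fixed; each sweep costs O(K * L) renderings.
config = [0] * L
for _ in range(5):
    for l in range(L):
        scores = [log_lik(render(config[:l] + [k] + config[l + 1:]), obs)
                  for k in range(K)]
        config[l] = int(np.argmax(scores))

print("truth:", truth, "exact MAP:", best, "greedy:", tuple(config))
```

Even at this toy scale the exact search already visits K^L = 64 configurations, while each greedy (or message-passing) sweep touches only K * L = 12 candidates; occluded or empty layers can make several configurations equally probable, which mirrors the ambiguity the abstract describes.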
