Bayesian Propagation for Perceiving Moving Objects

In this paper we address the problem of how form and motion information can be integrated to support the attentive tracking of multiple moving objects. The integration is cast in a Bayesian framework, and a Belief Propagation technique is exploited to produce a coherent form/motion labeling of regions of the observed scene. Experiments on both synthetic and real data are presented and discussed.
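
As a rough illustration of the kind of inference involved (a minimal sketch, not the paper's actual model), the code below runs sum-product loopy Belief Propagation on a small 4-connected grid Markov random field with a Potts smoothness prior, yielding a per-pixel labeling from noisy evidence. The unary likelihoods, the smoothness value, the number of labels, and the toy data are all assumptions introduced only for this example.

import numpy as np

def neighbors(y, x, H, W):
    # 4-connected neighbours of pixel (y, x).
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < H and 0 <= nx < W:
            yield ny, nx

def loopy_bp(unary, smoothness=0.8, n_iters=15):
    # Sum-product loopy belief propagation on a 4-connected grid MRF.
    # unary: (H, W, K) array of per-pixel evidence P(observation | label).
    # Returns approximate posterior marginals (beliefs), shape (H, W, K).
    H, W, K = unary.shape
    # Potts-style compatibility: identical neighbour labels are favoured.
    psi = np.full((K, K), (1.0 - smoothness) / max(K - 1, 1))
    np.fill_diagonal(psi, smoothness)

    # messages[(p, q)] is the message sent from pixel p to neighbouring pixel q.
    messages = {}
    for y in range(H):
        for x in range(W):
            for q in neighbors(y, x, H, W):
                messages[((y, x), q)] = np.ones(K) / K

    for _ in range(n_iters):
        new_messages = {}
        for (p, q), _ in messages.items():
            # Local evidence at p times all incoming messages except the one from q.
            prod = unary[p].copy()
            for r in neighbors(*p, H, W):
                if r != q:
                    prod *= messages[(r, p)]
            msg = psi.T @ prod          # marginalise over p's label
            new_messages[(p, q)] = msg / msg.sum()
        messages = new_messages

    # Beliefs: local evidence times all incoming messages, normalised per pixel.
    beliefs = unary.copy()
    for y in range(H):
        for x in range(W):
            for r in neighbors(y, x, H, W):
                beliefs[y, x] *= messages[(r, (y, x))]
            beliefs[y, x] /= beliefs[y, x].sum()
    return beliefs

# Toy usage: a square patch (label 1) inside a background (label 0),
# observed through noisy per-pixel likelihoods; BP smooths the labeling.
rng = np.random.default_rng(0)
H, W, K = 12, 12, 2
likelihood = np.full((H, W, K), 0.4)
likelihood[:, :, 0] = 0.6                    # background slightly preferred
likelihood[3:8, 3:8, :] = [0.35, 0.65]       # the "moving object" region
likelihood += 0.1 * rng.random((H, W, K))    # observation noise
beliefs = loopy_bp(likelihood)
labels = beliefs.argmax(axis=-1)             # coherent region labeling
print(labels)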
