Detection and characterization of multiple motion points

The computation of optical flow is a well-studied topic in biological and computational vision. However, the presence of multiple motions in dynamic imagery, due to occlusion or transparency, still raises challenging questions. In this paper, we propose an approach for the detection and characterization of occlusion and transparency. We present a theoretical framework for both types of multiple motion that explicitly shows the difference between occlusion and transparency in the frequency domain. We then employ an EM algorithm to compute one or two image velocities, together with a simple test for the detection of occlusion. Our approach differs from other EM-based approaches, which assume the superposition of two models in the spatial domain without providing a separate formal model for occlusion. We test and compare the characterization performance on synthetic and real data.
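
Below is a minimal sketch, not the authors' implementation, of how an EM mixture of two translational motions can be fitted to brightness-constancy constraints (Ix*u + Iy*v + It ≈ 0) at each pixel; the function name em_two_motions, the fixed noise scale sigma, and the initialization are illustrative assumptions.

```python
import numpy as np

def em_two_motions(Ix, Iy, It, n_iter=50, sigma=1.0):
    """Fit a two-component mixture of translational motions with EM (sketch).

    Each pixel contributes one brightness-constancy constraint
        Ix*u + Iy*v + It ~ 0
    and is softly assigned to one of two candidate velocities.
    Ix, Iy, It are flattened arrays of spatial/temporal derivatives.
    """
    A = np.stack([Ix, Iy], axis=1)           # (N, 2) constraint matrix
    b = -It                                  # (N,) right-hand side
    # crude initialization: perturb the single-motion least-squares solution
    v0 = np.linalg.lstsq(A, b, rcond=None)[0]
    velocities = [v0 + 0.5, v0 - 0.5]
    weights = np.full(2, 0.5)

    for _ in range(n_iter):
        # E-step: responsibilities from Gaussian residual likelihoods
        resid = np.stack([A @ v - b for v in velocities], axis=1)  # (N, 2)
        logp = -0.5 * (resid / sigma) ** 2 + np.log(weights)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: weighted least squares per velocity, then update priors
        for k in range(2):
            AtW = A.T * r[:, k]
            velocities[k] = np.linalg.solve(AtW @ A + 1e-9 * np.eye(2), AtW @ b)
        weights = r.mean(axis=0)

    return velocities, weights, r
```

If one mixture weight dominates after convergence, a single-motion hypothesis suffices; distinguishing occlusion from transparency when two motions are present relies on the paper's frequency-domain analysis, which this sketch does not reproduce.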
