The Role of Theory in the Evaluation of Image Motion Algorithms

Undeniably, the numerical evaluation of Computer Vision algorithms is of utmost importance. However, the role of theoretical knowledge in interpreting the numerical performance of those algorithms is often neglected. Moreover, the lack of theoretical research in Computer Vision has long been recognized. In this contribution, we demonstrate that extended theoretical knowledge of a phenomenon enables one to design algorithms that are better suited for the task at hand and to evaluate the theoretical assumptions of other, similar algorithms. For instance, the problem posed by multiple image motions was poorly understood in the frequency domain, yet frequency-based multiple-motion algorithms were nonetheless developed. We present algorithms for computing multiple image motions arising from occlusion and translucency, which are capable of extracting the information content of occlusion boundaries and of distinguishing these from additive translucency phenomena. These algorithms are based on recent theoretical results on occlusion in the frequency domain and demonstrate that a complete theoretical understanding of a phenomenon is required in order to design adequate algorithms. We conclude by proposing an evaluation protocol which includes theoretical considerations and their influence on the numerical evaluation of algorithms.
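The frequency-domain structure underlying such multiple-motion algorithms can be illustrated with a minimal numerical sketch (not the paper's algorithm): a 1-D pattern translating at velocity v concentrates its spatiotemporal spectral energy on the line w_t + v·w_x = 0, and under additive translucency two superposed translating patterns yield energy on two such lines, one per motion. Signal sizes, velocities, and function names below are illustrative assumptions.

```python
import numpy as np

N, T = 64, 64  # spatial and temporal samples (arbitrary choices)

def translating_sequence(v, seed):
    """Random 1-D pattern circularly shifted by v pixels per frame."""
    rng = np.random.default_rng(seed)
    pattern = rng.standard_normal(N)
    return np.stack([np.roll(pattern, v * t) for t in range(T)])

# Additive translucency: superposition of two translating patterns.
v1, v2 = 1, -1
seq = translating_sequence(v1, 0) + translating_sequence(v2, 1)

power = np.abs(np.fft.fft2(seq)) ** 2     # spectral power, axes (w_t, w_x)
w_t = np.fft.fftfreq(T)[:, None]
w_x = np.fft.fftfreq(N)[None, :]

def energy_near_line(v, tol=0.5 / T):
    """Fraction of total energy on the motion line w_t + v*w_x = 0 (mod 1)."""
    # Wrap-around distance to the nearest integer handles spectral aliasing.
    dist = np.abs((w_t + v * w_x + 0.5) % 1.0 - 0.5)
    return power[dist < tol].sum() / power.sum()

# Each motion accounts for roughly half the energy; together, nearly all of it.
print(energy_near_line(v1), energy_near_line(v2))
```

An occluding (multiplicative) boundary, by contrast, introduces distortion terms that spread energy off these lines, which is what allows occlusion to be distinguished from additive translucency in the frequency domain.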
