Self-Consistency and MDL: A Paradigm for Evaluating Point-Correspondence Algorithms, and Its Application to Detecting Changes in Surface Elevation

The self-consistency methodology is a new paradigm for evaluating certain vision algorithms without relying extensively on ground truth. We demonstrate its effectiveness on point-correspondence algorithms and use our approach to predict their accuracy.

For point-correspondence algorithms, our methodology consists of independently applying the algorithm to subsets of images obtained by varying the camera geometry while keeping the 3-D object geometry constant. Matches that should correspond to the same surface element in 3-D are collected to create statistics that serve as a measure of the accuracy and reliability of the algorithm. These statistics can then be used to predict the accuracy and reliability of the algorithm when applied to new images of new scenes.

An effective representation for these statistics is a scatter diagram along two dimensions: a normalized distance and a matching score. The normalized distance makes the statistics invariant to camera geometry, while the matching score allows us to predict the accuracy of individual matches. We introduce a new matching score based on Minimum Description Length (MDL) theory, which is shown to be a better predictor of the quality of a match than the traditional Sum of Squared Differences (SSD) score.

We demonstrate the potential of our methodology in two application areas. First, we compare different point-correspondence algorithms, matching scores, and window sizes. Second, we detect changes in terrain elevation between 3-D terrain models reconstructed from two sets of images taken at different times. We finish by discussing the application of self-consistency to other vision problems.
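The statistics-gathering step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names `ssd_score` and `self_consistency_stats` are assumptions introduced here, and the raw Euclidean distance stands in for the paper's normalized distance (the normalization that makes the statistics invariant to camera geometry requires the full camera model and is omitted).

```python
import numpy as np

def ssd_score(patch_a, patch_b):
    """Traditional Sum-of-Squared-Differences matching score between
    two equally sized image patches (lower means a better match)."""
    d = patch_a.astype(float) - patch_b.astype(float)
    return float(np.sum(d * d))

def self_consistency_stats(points_run1, points_run2, scores):
    """Collect self-consistency statistics for one surface element per row.

    points_run1, points_run2 : (N, 3) arrays of 3-D point estimates of the
        same N surface elements, reconstructed by two independent runs of
        the matcher on different image subsets of the same scene.
    scores : length-N sequence of matching scores for those matches.

    Returns a list of (distance, score) pairs -- the two axes of the
    scatter diagram (here the distance is unnormalized; see lead-in).
    """
    dists = np.linalg.norm(points_run1 - points_run2, axis=1)
    return list(zip(dists.tolist(), scores))

# Hypothetical usage: two independent reconstructions of one point.
run1 = np.array([[0.0, 0.0, 0.0]])
run2 = np.array([[3.0, 4.0, 0.0]])
pairs = self_consistency_stats(run1, run2, [12.5])
```

Each (distance, score) pair is one point in the scatter diagram; an algorithm is self-consistent to the extent that independent runs place the same surface element at nearly the same 3-D location, and the score axis lets one predict which individual matches are trustworthy.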
