Self-calibration of a camera using multiple images

The problem of camera calibration is central in computer vision. Existing work relies on a calibration pattern whose 3D model is known a priori. The authors present a complete method for calibrating a camera that requires only point matches from image sequences. Through experiments with noisy data, they show that a camera can be calibrated simply by pointing it at the environment, selecting points of interest, and tracking them in the images while moving the camera through an unknown motion. The calibration is computed in two steps, both sketched below. In the first step, the epipolar transformation is recovered by estimating the fundamental matrix. The second step uses the so-called Kruppa equations, which link the epipolar transformation to the intrinsic parameters; these equations are integrated into an iterative filtering scheme.
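
The following is a minimal sketch of the first step, assuming OpenCV and NumPy are available and that pts1/pts2 are matched, tracked interest points from two images of the sequence (N x 2 float arrays). It is an illustration of fundamental-matrix estimation in general, not the authors' original estimator.

```python
import numpy as np
import cv2

def estimate_epipolar_geometry(pts1: np.ndarray, pts2: np.ndarray):
    """Estimate the fundamental matrix F and the second epipole e'.

    F satisfies x2^T F x1 = 0 for corresponding points x1, x2 in
    homogeneous coordinates; e' is the null vector of F^T (F^T e' = 0).
    """
    # Robust estimation: the noisy-data setting motivates an
    # outlier-tolerant method rather than a plain 8-point solve.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        raise ValueError("fundamental matrix estimation failed")
    # The epipole e' in the second image spans the null space of F^T.
    _, _, vt = np.linalg.svd(F.T)
    e2 = vt[-1]
    return F, e2 / e2[-1], mask
```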
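
For the second step, the Kruppa equations can be written in a standard modern form (a common formulation, not quoted from the paper): with K the intrinsic matrix and omega* = K K^T the dual image of the absolute conic, each fundamental matrix F with second epipole e' gives, up to an unknown scale lambda,

```latex
F \,\omega^{*}\, F^{\top} \;=\; \lambda \,[e']_{\times}\, \omega^{*}\, [e']_{\times}^{\top},
\qquad \omega^{*} = K K^{\top},
```

where [e']_x is the skew-symmetric matrix of e'. Eliminating lambda yields two independent polynomial constraints on the entries of omega* per camera motion, so accumulating them over the image sequence constrains, and eventually determines, the intrinsic parameters K.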
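
As a rough stand-in for the paper's iterative filtering scheme, one can stack the Kruppa residuals from several fundamental matrices and solve for the intrinsics by batch nonlinear least squares. This is a plainly named substitute, not the authors' filter; the zero-skew model and the parameter names (fu, fv, u0, v0) are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def skew(e):
    # Skew-symmetric matrix [e]_x such that skew(e) @ v == np.cross(e, v).
    return np.array([[0, -e[2], e[1]],
                     [e[2], 0, -e[0]],
                     [-e[1], e[0], 0]])

def kruppa_residuals(params, F_list, e_list):
    fu, fv, u0, v0 = params
    K = np.array([[fu, 0, u0], [0, fv, v0], [0, 0, 1.0]])
    w = K @ K.T                      # dual image of the absolute conic
    res = []
    for F, e2 in zip(F_list, e_list):
        A = F @ w @ F.T              # left-hand side of the Kruppa relation
        B = skew(e2) @ w @ skew(e2).T
        # The relation holds only up to scale; compare normalized matrices.
        res.append((A / np.linalg.norm(A) - B / np.linalg.norm(B)).ravel())
    return np.concatenate(res)

# intrinsics0: an initial guess, e.g. the image center and a nominal focal
# length; F_list/e_list come from pairs of images in the sequence.
# sol = least_squares(kruppa_residuals, intrinsics0, args=(F_list, e_list))
```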