A COMPARISON AND COMBINATION OF METHODS FOR CO-REGISTRATION OF MULTI-MODAL IMAGES

ABSTRACT: The combined use and analysis of images from different sensors for various applications, including disaster monitoring, often requires image co-registration as a first step. Co-registration is based on the automated matching of corresponding image features (often just 10-40 points) and becomes very difficult when the images differ strongly. Perhaps the most difficult case is the co-registration of optical and SAR images, but multispectral images can also differ considerably. In difficult co-registration cases, many of the point correspondences are wrong. Thus, an additional problem is to detect and automatically eliminate matching errors. In this paper, we present and compare various matching methods and show some of the benefits of combining them. The two main methods used are Mutual Information (MI) and a method based on the Discrete Fourier Transform (FTCC). We extend the methods to also estimate the matching quality and exclude blunders, and present results on this as well. Additionally, in one test a previously used Least Squares Matching (LSM) method was employed. Our test data include SAR and optical images from the 2011 Tohoku Earthquake and Tsunami in Japan and from a region around Thun, Switzerland. Since no ground truth was available, the results for Tohoku were checked visually. As an additional test dataset we use very different AVHRR multispectral images that are already co-registered; by introducing known geometric distortions we can perform quantitative evaluations with these data. Both MI and FTCC show quite robust performance, with FTCC generally producing more blunders but, for correct points, slightly better accuracy. Matching quality evaluation and, to a lesser degree, the combination of the matching methods reduce the number of blunders very significantly, up to almost complete elimination.
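
To illustrate the two main similarity measures named above, the following is a minimal sketch (not the authors' implementation): mutual information computed from a joint histogram of two patches, an integer shift estimated from a normalized Fourier-domain cross-correlation (phase-correlation style), and a simple combination of the two to flag likely blunders. The function names, thresholds, and synthetic test data are assumptions made for illustration only.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI (in nats) of two equally sized patches, from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def fourier_shift(a, b):
    """Integer shift (dy, dx) such that b approx equals np.roll(a, (dy, dx)),
    estimated from the normalized cross-power spectrum."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts into the range [-N/2, N/2)
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return int(dy), int(dx), float(corr.max())

def check_match(patch_ref, patch_search, mi_min=0.3, peak_min=0.05):
    """Estimate the shift with the Fourier method, align, then verify with MI.
    Both thresholds are hypothetical illustration values; rejecting candidates
    that fail either test mimics blunder elimination by combining methods."""
    dy, dx, peak = fourier_shift(patch_ref, patch_search)
    aligned = np.roll(patch_search, (-dy, -dx), axis=(0, 1))
    mi = mutual_information(patch_ref, aligned)
    ok = (peak >= peak_min) and (mi >= mi_min)
    return ok, (dy, dx), peak, mi

if __name__ == "__main__":
    # Synthetic example: a structured patch and a shifted, radiometrically
    # rescaled copy standing in for a second acquisition.
    rng = np.random.default_rng(0)
    y, x = np.mgrid[0:64, 0:64]
    ref = np.sin(x / 5.0) + np.cos(y / 7.0) + 0.1 * rng.standard_normal((64, 64))
    search = 0.6 * np.roll(ref, (3, -5), axis=(0, 1)) + 0.3
    print(check_match(ref, search))   # shift recovered as (3, -5)
```

In this sketch the Fourier step supplies the geometric offset, while MI (which is insensitive to the radiometric differences typical of multi-modal data) acts as an independent quality check; a real pipeline would of course operate on many candidate points and tune the acceptance thresholds.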