On the positioning of multisensor imagery for exploitation and target recognition

Modern image exploitation tasks have evolved from early single-image, pixel-based, model-less methods to current multi-image, multisensor, multiplatform, and model-based approaches. In this context, image positioning, the process of establishing the precise geometric relationship of an acquired image to the three-dimensional (3-D) world, has become an enabling technique for state-of-the-art multisensor data exploitation. Precise image positioning provides several benefits. Image registration, traditionally formulated as an image-to-image alignment problem, can now be carried out in accordance with interior and exterior sensor geometries. Images from sensors in arbitrary locations and orientations can be positioned with respect to local vertical and geocentric coordinate systems. This paper presents techniques for positioning images derived from various sensors such as electro-optical (E-O), synthetic aperture radar (SAR), and interferometric synthetic aperture radar (IFSAR). Applications to model-supported image exploitation are also discussed.
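To make the geometric relationships the abstract refers to concrete, the following is a minimal sketch, not the paper's own algorithm, of the standard photogrammetric machinery involved: converting between geocentric (ECEF) and local vertical (ENU) coordinate frames, and projecting a ground point into an image through the collinearity equations using assumed interior (focal length, principal point) and exterior (position, orientation) parameters. All function names and the numeric example are illustrative assumptions.

```python
import numpy as np


def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic coordinates (WGS-84) to geocentric (ECEF) XYZ in metres."""
    a = 6378137.0               # WGS-84 semi-major axis
    e2 = 6.69437999014e-3       # first eccentricity squared
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    n = a / np.sqrt(1.0 - e2 * np.sin(lat) ** 2)
    x = (n + h) * np.cos(lat) * np.cos(lon)
    y = (n + h) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - e2) + h) * np.sin(lat)
    return np.array([x, y, z])


def ecef_to_enu_matrix(lat_deg, lon_deg):
    """Rotation taking geocentric (ECEF) axis offsets into a local vertical (ENU) frame."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([
        [-np.sin(lon),                np.cos(lon),               0.0],
        [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
        [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)],
    ])


def project_point(ground_ecef, camera_ecef, R_cam_from_enu, R_enu_from_ecef,
                  f, x0=0.0, y0=0.0):
    """Collinearity projection of a geocentric ground point into image coordinates.

    f, x0, y0 are the interior orientation (focal length, principal point);
    camera_ecef and the rotation matrices are the exterior orientation.
    """
    # Express the ground point in the camera frame: rotate the ECEF offset
    # into the local vertical frame, then into the camera frame.
    X, Y, Z = R_cam_from_enu @ (R_enu_from_ecef @ (ground_ecef - camera_ecef))
    # Perspective division (standard collinearity form; Z is negative for
    # points below a camera whose z-axis points up).
    x = x0 - f * X / Z
    y = y0 - f * Y / Z
    return x, y


if __name__ == "__main__":
    # Hypothetical nadir-looking frame camera 1000 m above a reference point.
    lat0, lon0 = 38.9, -77.0
    cam = geodetic_to_ecef(lat0, lon0, 1000.0)
    gnd = geodetic_to_ecef(lat0 + 0.001, lon0, 0.0)   # roughly 111 m north of nadir
    R_enu = ecef_to_enu_matrix(lat0, lon0)
    R_cam = np.eye(3)                                  # camera axes aligned with ENU
    print(project_point(gnd, cam, R_cam, R_enu, f=0.152))  # 152 mm focal length
```

In practice, as the abstract notes, sensor-specific geometries (E-O frame or pushbroom models, SAR range-Doppler geometry, IFSAR baselines) replace the simple frame-camera model above, but the underlying idea of relating image measurements to geocentric and local vertical frames through interior and exterior parameters is the same.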
