Navigation Using Images: A Survey of Techniques

The accuracy and availability of Global Navigation Satellite Systems (GNSS) have revolutionized navigation, and many applications now depend on world-wide, meter-level positioning. Unfortunately, satellite navigation signals are not available in all environments. To address this limitation, researchers have devoted considerable effort to investigating the use of image sequences for navigation. Many image-aided navigation techniques have been demonstrated, each under different assumptions and most using ad-hoc methods; as a result, little guidance exists on how to apply image-aided techniques to problems with differing assumptions. The goal of this article is to characterize the observability properties of various forms of image-aided navigation. Once the observability is established, additional measurements that can augment weak areas are presented and discussed. The limitations of current image-aided navigation techniques are shown to require additional measurements from a non-homogeneous sensor for reliable, long-term navigation.
