Terrain mapping and landing operations using vision-based navigation systems

This paper documents our recent end-to-end demonstration of a unique simultaneous localization and mapping experiment. In the experiment, an omni-directional robotic system equipped with a stereo camera acts as a scout agent. The acquired image streams are processed autonomously by a state-of-the-art computational vision pipeline, which generates three-dimensional models of the unstructured terrain in question to designed accuracies. A rigorously linear algorithm is proposed for fast and efficient computation of relative navigation hypotheses. These hypotheses feed an outer-loop statistical decision process that selects the best relative motion model while simultaneously deriving error metrics for that model. The resulting model data are used to determine a "safe" landing area for an unmanned air vehicle (quadrotor). Once relayed, this information is used for the safe landing of the quadrotor, concluding the experiment. Three-dimensional models of the scene are rendered along with the relative navigation solutions of the platform motion. The experimental results give strong grounds for optimism that passive vision-based navigation systems can become routine practice for autonomous landing and navigation.
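The "linear hypothesis generation plus outer-loop statistical selection" scheme described above can be sketched in a generic form. The snippet below is an illustrative reconstruction, not the paper's actual algorithm: it pairs a linear least-squares rigid-motion solver (the Kabsch/Procrustes solution) with a RANSAC-style hypothesize-and-test loop that picks the motion model with the most inlier support and reports a residual-based error metric. All function names, thresholds, and the choice of solver are assumptions for illustration.

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Linear least-squares fit of R, t such that Q ~ R @ P + t
    (Kabsch algorithm via SVD). P, Q are 3 x N point arrays."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def ransac_motion(P, Q, n_iter=200, tol=0.05, seed=0):
    """Outer-loop statistical selection: sample minimal 3-point sets,
    score each motion hypothesis by inlier count, keep the best, then
    refit on all inliers and return a residual-spread error metric."""
    rng = np.random.default_rng(seed)
    n = P.shape[1]
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(n, size=3, replace=False)
        R, t = estimate_rigid_motion(P[:, idx], Q[:, idx])
        resid = np.linalg.norm(Q - (R @ P + t), axis=0)
        inliers = resid < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final linear refit on the consensus set.
    R, t = estimate_rigid_motion(P[:, best_inliers], Q[:, best_inliers])
    resid = np.linalg.norm(Q[:, best_inliers] - (R @ P[:, best_inliers] + t), axis=0)
    return R, t, best_inliers, resid.std()
```

Because the per-hypothesis solver is linear (a single SVD), each iteration of the outer loop is cheap, which is the efficiency property the abstract emphasizes; the standard deviation of the inlier residuals serves as a simple error metric on the selected motion model.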
