Vision Guided Landing of an Autonomous Helicopter in Hazardous Terrain

Future robotic space missions will employ a precision soft-landing capability that enables exploration of previously inaccessible sites of strong scientific significance. This capability requires a fully autonomous onboard system that identifies and avoids hazardous features such as steep slopes and large rocks. Such a system would also give unmanned aerial vehicles greater functionality in unstructured terrain. This paper describes an algorithm for landing hazard avoidance based on images from a single moving camera. The core of the algorithm is an efficient application of structure from motion to generate a dense elevation map of the landing area. Hazards are then detected in this map and a safe landing site is selected. The algorithm has been implemented on an autonomous helicopter testbed and demonstrated four times, resulting in the first autonomous landing of an unmanned helicopter in unknown and hazardous terrain.
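
The abstract does not spell out the hazard-detection and site-selection steps, but they can be illustrated with a minimal sketch: given a dense elevation map (here, a 2D NumPy array on a regular grid), fit a local plane in each window to estimate slope and residual roughness, mark cells exceeding thresholds as hazardous, and pick the safe cell with the greatest clearance from any hazard. The window size, thresholds, and least-squares plane fit are illustrative assumptions, not the authors' exact method or parameters.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def select_landing_site(elevation, cell_size, win=5,
                        max_slope_deg=10.0, max_roughness=0.15):
    """Mark hazardous cells in a dense elevation map and return the safe
    cell farthest from any hazard. Thresholds and window size are
    illustrative assumptions, not the paper's actual parameters."""
    rows, cols = elevation.shape
    hazard = np.ones((rows, cols), dtype=bool)  # border cells stay hazardous
    half = win // 2

    # Design matrix for a least-squares plane fit z ~ a*x + b*y + c
    # over a win x win window, reused for every cell.
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    A = np.column_stack([xs.ravel() * cell_size,
                         ys.ravel() * cell_size,
                         np.ones(win * win)])
    pinv = np.linalg.pinv(A)

    for r in range(half, rows - half):
        for c in range(half, cols - half):
            z = elevation[r - half:r + half + 1,
                          c - half:c + half + 1].ravel()
            coeffs = pinv @ z                      # [a, b, c]
            a, b = coeffs[0], coeffs[1]
            slope_deg = np.degrees(np.arctan(np.hypot(a, b)))
            roughness = np.max(np.abs(z - A @ coeffs))  # residual to plane
            hazard[r, c] = (slope_deg > max_slope_deg
                            or roughness > max_roughness)

    # Safest site: the safe cell with the largest Euclidean distance
    # (in map units) to the nearest hazardous cell.
    clearance = distance_transform_edt(~hazard) * cell_size
    site = np.unravel_index(np.argmax(clearance), clearance.shape)
    return site, clearance[site]
```

Using the distance transform to maximize clearance is one common way to prefer landing sites far from detected hazards; other selection criteria (e.g., weighting by distance to the vehicle's current position) would fit the same pipeline.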