A Vision-Based Algorithm for Landing Unmanned Aerial Vehicles

Autonomous landing is an essential capability for the autonomous control of unmanned aerial vehicles (UAVs). In this article we present the design and implementation of a vision algorithm for autonomous landing. An onboard camera captures images of the ground on which the landing target (landmark) is placed, and a method based on Zernike moments detects and recognizes the target. The pose and position of the UAV are then estimated from the image of the landmark. Finally, we present experimental results from an OpenGL-based testbed, together with trials on simulated flight video sequences, to demonstrate the accuracy and efficiency of our algorithm.
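As a rough illustration of the kind of Zernike-moment landmark recognition described above (not the authors' implementation), the following Python sketch binarizes a landmark template and a camera frame, computes rotation-invariant Zernike-moment magnitudes with the mahotas library, and flags frame regions whose moment signature lies close to the template's. The function names, the moment degree, and the matching threshold are illustrative assumptions.

```python
# Sketch of Zernike-moment landmark matching, assuming mahotas for the
# moment computation and OpenCV for thresholding/contour extraction.
import cv2
import numpy as np
import mahotas


def zernike_signature(binary_patch, degree=8):
    """Rotation-invariant Zernike-moment magnitudes of a binary patch."""
    radius = min(binary_patch.shape) // 2
    return mahotas.features.zernike_moments(binary_patch, radius, degree=degree)


def match_landmark(frame_gray, template_gray, degree=8, threshold=0.3):
    """Return bounding boxes of regions whose Zernike signature is close
    (in Euclidean distance) to the landmark template's signature."""
    # Signature of the reference landmark.
    _, template_bin = cv2.threshold(template_gray, 0, 255,
                                    cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ref = zernike_signature(template_bin, degree)

    # Candidate regions in the current camera frame.
    _, frame_bin = cv2.threshold(frame_gray, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(frame_bin, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    matches = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w < 20 or h < 20:  # skip tiny blobs unlikely to be the landmark
            continue
        patch = frame_bin[y:y + h, x:x + w]
        sig = zernike_signature(patch, degree)
        if np.linalg.norm(sig - ref) < threshold:
            matches.append((x, y, w, h))
    return matches
```

Because the Zernike magnitudes are invariant to rotation, such a comparison can recognize the landmark regardless of the UAV's heading; the detected region would then feed a separate pose-estimation step, which is beyond this sketch.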
