Precision UAV Landing in Unstructured Environments

Autonomous landing is a necessary part of autonomous flight. One way to land with high confidence in safety is to return to the location the drone took off from. Implementations of return-to-home functionality fall short when they rely solely on GPS or odometry, as measurement inaccuracies and drift in the state estimate guide the drone to a position with a large offset from the take-off point. This can be particularly dangerous if the drone took off next to, for example, a body of water. Current work on precision landing relies on localizing to a known landing pattern, which requires the pilot to carry that pattern with them. We propose a method that uses a downward-facing fisheye camera to accurately land a UAV at its take-off location on an unstructured surface, without a landing pattern. Specifically, the approach estimates the drone's position relative to its take-off path and uses this estimate to guide the drone back. Thanks to the large field of view of the fisheye lens, the algorithm provides visual feedback from a large initial position error at the start of the landing down to 25 cm above the ground at the end. Empirically, the algorithm corrects the drift error in the state estimate and lands with an accuracy of 40 cm.
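
To make the described control structure concrete, here is a minimal sketch of one iteration of such a visual landing loop. This is an illustrative assumption, not the authors' implementation: the function name `landing_velocity_command`, the gains, the descent rates, and the idea of cutting over to open-loop descent at the 25 cm visual-feedback limit are all hypothetical, and the relative-position estimate is assumed to come from elsewhere (e.g., matching the current fisheye image against imagery recorded during take-off).

```python
import numpy as np

def landing_velocity_command(rel_xy_error, altitude, gain=0.5, cutoff_alt=0.25):
    """One iteration of a visual landing loop (hypothetical sketch).

    rel_xy_error: estimated (x, y) offset from the take-off point, in metres,
                  assumed to be provided by a vision front end that localizes
                  the drone relative to its take-off path.
    altitude:     current height above ground, in metres.
    Returns a (vx, vy, vz) velocity command in metres per second.
    """
    if altitude <= cutoff_alt:
        # Below roughly 25 cm the abstract states visual feedback ends,
        # so this sketch descends open-loop for the final stretch.
        return np.array([0.0, 0.0, -0.1])
    # Proportional correction of the lateral offset while descending.
    vx, vy = -gain * np.asarray(rel_xy_error, dtype=float)
    vz = -0.3  # assumed constant descent rate during the corrected phase
    return np.array([vx, vy, vz])
```

A proportional law like this is only one plausible choice; the point of the sketch is the structure: lateral correction driven by the vision-based relative-position estimate, with feedback available until a fixed height above the ground.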