Challenges and implemented technologies used in autonomous drone racing

Autonomous drone racing (ADR) challenges autonomous drones to navigate a cluttered indoor racing track without relying on any external sensing: all sensing and computation must be performed with onboard resources. Although no team has yet completed the full racing track, the most successful teams combined waypoint-tracking methods with robust visual recognition of the distinctly colored gates, exploiting the fact that complete environmental information was provided to participants before the events. In this paper, we present ADR as a benchmark testing ground for autonomous drone technologies and analyze the challenges and technologies observed in the two previous ADR events held at IROS 2016 and IROS 2017. Five teams that participated in these events describe their implemented technologies, which include a modified ORB-SLAM, a robust alignment method for waypoint deployment, sensor fusion for motion estimation, deep learning for gate detection and motion control, and stereo vision for gate detection.
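As one illustration of the color-based gate recognition mentioned above, the sketch below shows a minimal HSV-threshold detector in Python with OpenCV. It is not any team's actual pipeline: the `detect_gate` helper, the HSV bounds, and the `min_area` threshold are illustrative assumptions that would need tuning for the real gates and venue lighting.

```python
# Minimal sketch (not from the paper): color-threshold gate detection with OpenCV.
# Assumes gates are painted in a single saturated color (e.g., orange) and that the
# HSV bounds below would be re-tuned for the actual gate color and lighting.
import cv2
import numpy as np


def detect_gate(bgr_frame,
                lower_hsv=(5, 120, 120),    # hypothetical bounds for an orange gate
                upper_hsv=(20, 255, 255),
                min_area=2000.0):
    """Return the pixel center (u, v) of the largest gate-colored blob, or None."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    # Remove speckle noise before extracting contours.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(largest) < min_area:
        return None
    m = cv2.moments(largest)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```

In a waypoint-tracking setup of the kind described above, the detected gate center would typically be fused with the onboard state estimate to correct the pre-planned waypoints, rather than being used as the sole navigation cue.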
