AutoSoC: Automating Algorithm-SOC Co-design for Aerial Robots

Aerial autonomous machines (drones) have a plethora of promising applications and use cases. While the popularity of these autonomous machines continues to grow, challenges such as endurance and agility could hinder their practical deployment. Achieving high agility requires a high closed-loop control frequency. However, given the resource-constrained nature of an aerial robot, reaching a high control-loop frequency is hugely challenging and requires careful co-design of the algorithm and the onboard computer. Such an effort requires infrastructure that bridges several domains, namely robotics, machine learning, and system architecture design. To that end, we present AutoSoC, a framework for co-designing algorithms as well as hardware accelerator systems for end-to-end learning-based aerial autonomous machines. We demonstrate the efficacy of the framework by training an obstacle avoidance algorithm for aerial robots to navigate in a densely cluttered environment. For the best-performing algorithm, our framework generates accelerator design candidates with varying performance, area, and power consumption. The framework also runs the ASIC place-and-route flow and generates a layout of the floor-planned accelerator, which can be used to tape out the final hardware chip.
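
Below is a minimal sketch of the kind of algorithm-accelerator co-design sweep the abstract describes: a trained obstacle-avoidance policy fixes the network workload, candidate accelerator configurations are enumerated, and each candidate is kept only if it meets the control-loop latency budget, then ranked by area and power. All names, the analytical cost model, and the parameter values (AcceleratorConfig, estimate_ppa, CONTROL_LOOP_HZ, the 200 MHz clock) are illustrative assumptions, not the AutoSoC API.

```python
# Hypothetical sketch of an algorithm-accelerator co-design sweep.
# Names and the analytical PPA model are illustrative assumptions only.

from dataclasses import dataclass
from itertools import product

CONTROL_LOOP_HZ = 100                     # target closed-loop control frequency
LATENCY_BUDGET_MS = 1e3 / CONTROL_LOOP_HZ # per-inference latency budget

@dataclass
class AcceleratorConfig:
    num_pes: int    # number of processing elements
    sram_kb: int    # on-chip buffer size
    bitwidth: int   # operand precision

def estimate_ppa(cfg, policy_macs):
    """Toy analytical model: returns (latency_ms, area_mm2, power_mw)."""
    throughput = cfg.num_pes * (16 / cfg.bitwidth)   # MACs per cycle (toy scaling)
    latency_ms = policy_macs / (throughput * 200e3)  # assumes a 200 MHz clock
    area_mm2 = 0.01 * cfg.num_pes + 0.002 * cfg.sram_kb
    power_mw = 0.5 * cfg.num_pes + 0.1 * cfg.sram_kb
    return latency_ms, area_mm2, power_mw

def sweep(policy_macs):
    """Return candidates meeting the control-loop latency budget, smallest first."""
    candidates = []
    for pes, sram, bits in product([32, 64, 128], [64, 128, 256], [8, 16]):
        cfg = AcceleratorConfig(pes, sram, bits)
        latency, area, power = estimate_ppa(cfg, policy_macs)
        if latency <= LATENCY_BUDGET_MS:
            candidates.append((cfg, latency, area, power))
    # prefer the smallest, lowest-power design that still meets timing
    return sorted(candidates, key=lambda c: (c[2], c[3]))

if __name__ == "__main__":
    POLICY_MACS = 5e6   # MAC count of the trained obstacle-avoidance policy (assumed)
    for cfg, lat, area, power in sweep(POLICY_MACS)[:3]:
        print(cfg, f"{lat:.2f} ms, {area:.2f} mm^2, {power:.1f} mW")
```

In a real flow, the surviving candidates would then be pushed through synthesis and place-and-route to obtain the floor-planned layout mentioned above, rather than scored with an analytical model.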
