Learning to Seek: Autonomous Source Seeking on a Nano Drone Microcontroller with Deep Reinforcement Learning
Bardienus Pieter Duisterhof, Srivatsan Krishnan, Jonathan J. Cruz, Colby R. Banbury, William Fu, Aleksandra Faust, Guido C. H. E. de Croon, Vijay Janapa Reddi
[1] Jian Huang, et al. Odor source localization algorithms on mobile robots: A review and future outlook, 2019, Robotics Auton. Syst.
[2] Achim J. Lilienthal, et al. Smelling Nano Aerial Vehicle for Gas Source Localization and Mapping, 2019, Sensors.
[3] Aleksandra Faust, et al. Air Learning: An AI Research Platform for Algorithm-Hardware Benchmarking of Autonomous Aerial Robots, 2019, ArXiv.
[4] Anthony G. Francis, et al. Evolving Rewards to Automate Reinforcement Learning, 2019, ArXiv.
[5] Luca Benini, et al. A 64-mW DNN-Based Visual Navigation Engine for Autonomous Nano-Drones, 2018, IEEE Internet of Things Journal.
[6] Ashish Kapoor, et al. AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles, 2017, FSR.
[7] Lydia Tapia, et al. PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-Based Planning, 2017, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[8] Azer Bestavros, et al. Reinforcement Learning for UAV Attitude Control, 2018, ACM Trans. Cyber Phys. Syst.
[9] Sergey Levine, et al. Generalization through Simulation: Integrating Simulated and Real Data into Deep Reinforcement Learning for Vision-Based Autonomous Flight, 2019, 2019 International Conference on Robotics and Automation (ICRA).
[10] Aiguo Song, et al. Small Teleoperated Robot for Nuclear Radiation and Chemical Leak Detection, 2012.
[11] Luca Benini, et al. Ultra Low Power Deep-Learning-powered Autonomous Nano Drones, 2018, ArXiv.
[12] Dario Izzo, et al. Evolutionary robotics approach to odor source localization, 2013, Neurocomputing.
[13] Pritish Narayanan, et al. Deep Learning with Limited Numerical Precision, 2015, ICML.
[14] Rui Zou, et al. Particle Swarm Optimization-Based Source Seeking, 2015, IEEE Transactions on Automation Science and Engineering.
[15] Steven M. LaValle, et al. I-Bug: An intensity-based bug algorithm, 2009, 2009 IEEE International Conference on Robotics and Automation.
[16] James Evans, et al. Optimization algorithms for networks and graphs, 1992.
[17] Vincent Vanhoucke, et al. Improving the speed of neural networks on CPUs, 2011.
[18] Gaurav S. Sukhatme, et al. Sim-to-(Multi)-Real: Transfer of Low-Level Robust Control Policies to Multiple Quadrotors, 2019, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[19] Henry Zhu, et al. Soft Actor-Critic Algorithms and Applications, 2018, ArXiv.
[20] Sonia Martínez, et al. Stochastic Source Seeking for Mobile Robots in Obstacle Environments Via the SPSA Method, 2019, IEEE Transactions on Automatic Control.
[21] Roland Siegwart, et al. Control of a Quadrotor With Reinforcement Learning, 2017, IEEE Robotics and Automation Letters.
[22] Sergey Levine, et al. Low-Level Control of a Quadrotor With Deep Model-Based Reinforcement Learning, 2019, IEEE Robotics and Automation Letters.
[23] Song Han, et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[24] Guido C. H. E. de Croon, et al. A Comparative Study of Bug Algorithms for Robot Navigation, 2018, Robotics Auton. Syst.
[25] Sergey Levine, et al. QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation, 2018, CoRL.