Sim-to-Real Transfer with Incremental Environment Complexity for Reinforcement Learning of Depth-Based Robot Navigation
Thomas Chaffre | Julien Moras | Adrien Chan-Hon-Tong | Julien Marzat
[1] Erik Derner, et al. Vision-based Navigation Using Deep Reinforcement Learning, 2019, 2019 European Conference on Mobile Robots (ECMR).
[2] Henrik I. Christensen, et al. How to pick the domain randomization parameters for sim-to-real transfer of reinforcement learning policies?, 2019, ArXiv.
[3] Manmohan Krishna Chandraker, et al. Learning to Simulate, 2018, ICLR.
[4] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[5] Alan Fern, et al. Explainable Reinforcement Learning via Reward Decomposition, 2019.
[6] Wojciech Jaśkowski, et al. ViZDoom: A Doom-based AI research platform for visual reinforcement learning, 2016, 2016 IEEE Conference on Computational Intelligence and Games (CIG).
[7] G. DeJong, et al. Theory and Application of Reward Shaping in Reinforcement Learning, 2004.
[8] Yuval Tassa, et al. Continuous control with deep reinforcement learning, 2015, ICLR.
[9] Wojciech Zaremba, et al. Domain randomization for transferring deep neural networks from simulation to the real world, 2017, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[10] Jayshree Ghorpade, et al. GPGPU Processing in CUDA Architecture, 2012, ArXiv.
[11] Alejandro Hernández Cordero, et al. Extending the OpenAI Gym for robotics: a toolkit for reinforcement learning using ROS and Gazebo, 2016, ArXiv.
[12] Sanja Fidler, et al. Meta-Sim: Learning to Generate Synthetic Datasets, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[13] Sergey Levine, et al. CAD²RL: Real Single-Image Flight without a Single Real Image, 2016, Robotics: Science and Systems.
[14] Andrew Howard, et al. Design and use paradigms for Gazebo, an open-source multi-robot simulator, 2004, 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[15] Lydia E. Kavraki, et al. The Open Motion Planning Library, 2012, IEEE Robotics & Automation Magazine.
[16] Ming Liu, et al. Virtual-to-real deep reinforcement learning: Continuous control of mobile robots for mapless navigation, 2017, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[17] Gabriel Dulac-Arnold, et al. Challenges of Real-World Reinforcement Learning, 2019, ArXiv.
[18] Juan D. Tardós, et al. ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras, 2016, IEEE Transactions on Robotics.
[19] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[20] Pascual Campoy Cervera, et al. A Review of Deep Learning Methods and Applications for Unmanned Aerial Vehicles, 2017, Journal of Sensors.
[21] Guillaume Lample, et al. Playing FPS Games with Deep Reinforcement Learning, 2016, AAAI.
[22] Andrew L. Maas. Rectifier Nonlinearities Improve Neural Network Acoustic Models, 2013.
[23] Xi Chen, et al. Evolution Strategies as a Scalable Alternative to Reinforcement Learning, 2017, ArXiv.
[24] Hriday Bavle, et al. A Fully-Autonomous Aerial Robot for Search and Rescue Applications in Indoor Environments using Learning-Based Techniques, 2018, Journal of Intelligent & Robotic Systems.
[25] Aleksandra Faust, et al. Learning Navigation Behaviors End-to-End with AutoRL, 2018, IEEE Robotics and Automation Letters.
[26] Alex Graves, et al. Playing Atari with Deep Reinforcement Learning, 2013, ArXiv.
[27] Jörg Stückler, et al. Large-scale direct SLAM with stereo cameras, 2015, 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[28] Wojciech Zaremba, et al. Learning to Execute, 2014, ArXiv.
[29] Nasser Mozayani, et al. A New Potential-Based Reward Shaping for Reinforcement Learning Agent, 2019, ArXiv.
[30] Roland Siegwart, et al. Model Predictive Control for Trajectory Tracking of Unmanned Aerial Vehicles Using Robot Operating System, 2017.
[31] Wolfram Burgard, et al. OctoMap: A Probabilistic, Flexible, and Compact 3D Map Representation for Robotic Systems, 2010.
[32] Sergey Levine, et al. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, 2018, ICML.
[33] Vladlen Koltun, et al. Benchmarking Classic and Learned Navigation in Complex 3D Environments, 2019, ArXiv.
[34] Qiaozhi Wang, et al. OffWorld Gym: open-access physical robotics environment for real-world reinforcement learning benchmark and research, 2019, ArXiv.
[35] Marcin Andrychowicz, et al. Solving Rubik's Cube with a Robot Hand, 2019, ArXiv.
[36] John J. Leonard, et al. Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age, 2016, IEEE Transactions on Robotics.
[37] Razvan Pascanu, et al. Sim-to-Real Robot Learning from Pixels with Progressive Nets, 2016, CoRL.
[38] Sen Wang, et al. Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning, 2017, RSS 2017.
[39] Demis Hassabis, et al. Mastering the game of Go without human knowledge, 2017, Nature.
[40] Jason Weston, et al. Curriculum learning, 2009, ICML '09.
[41] J. Elman. Learning and development in neural networks: the importance of starting small, 1993, Cognition.
[42] Yishay Mansour, et al. Policy Gradient Methods for Reinforcement Learning with Function Approximation, 1999, NIPS.
[43] Ole Ravn, et al. Receding horizon approach to path following mobile robot in the presence of velocity constraints, 2001, 2001 European Control Conference (ECC).