SimGAN: Hybrid Simulator Identification for Domain Adaptation via Adversarial Reinforcement Learning
[1] Wojciech Zaremba,et al. OpenAI Gym , 2016, ArXiv.
[2] Greg Turk,et al. Preparing for the Unknown: Learning a Universal Policy with Online System Identification , 2017, Robotics: Science and Systems.
[3] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[4] Razvan Pascanu,et al. Sim-to-Real Robot Learning from Pixels with Progressive Nets , 2016, CoRL.
[5] Sergey Levine,et al. Sim-To-Real via Sim-To-Sim: Data-Efficient Robotic Grasping via Randomized-To-Canonical Adaptation Networks , 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Jie Tan,et al. Learning Agile Robotic Locomotion Skills by Imitating Animals , 2020, Robotics: Science and Systems.
[7] Jackie Kay,et al. Modelling Generalized Forces with Reinforcement Learning for Sim-to-Real Transfer , 2019, ArXiv.
[8] Stefano Ermon,et al. Generative Adversarial Imitation Learning , 2016, NIPS.
[9] Sergey Levine,et al. Learning Robust Rewards with Adversarial Inverse Reinforcement Learning , 2017, ICLR.
[10] Marcin Andrychowicz,et al. Solving Rubik's Cube with a Robot Hand , 2019, ArXiv.
[11] Abhinav Gupta,et al. Robust Adversarial Reinforcement Learning , 2017, ICML.
[12] C. Karen Liu,et al. Sim-to-Real Transfer for Biped Locomotion , 2019, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[13] Jan Peters,et al. Bayesian Domain Randomization for Sim-to-Real Transfer , 2020, ArXiv.
[14] Alec Radford,et al. Proximal Policy Optimization Algorithms , 2017, ArXiv.
[15] Kostas E. Bekris,et al. Fast Model Identification via Physics Engines for Data-Efficient Policy Search , 2017, IJCAI.
[16] Gentiane Venture,et al. Identifiability and identification of inertial parameters using the underactuated base-link dynamics for legged multibody systems , 2014, Int. J. Robotics Res..
[17] Joonho Lee,et al. Learning agile and dynamic motor skills for legged robots , 2019, Science Robotics.
[18] Chelsea Finn,et al. Rapidly Adaptable Legged Robots via Evolutionary Meta-Learning , 2020, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[19] Sergey Levine,et al. CAD²RL: Real Single-Image Flight without a Single Real Image , 2016, Robotics: Science and Systems.
[20] Jakub W. Pachocki,et al. Learning dexterous in-hand manipulation , 2018, Int. J. Robotics Res..
[21] M. Gautier,et al. Exciting Trajectories for the Identification of Base Inertial Parameters of Robots , 1991, Proceedings of the 30th IEEE Conference on Decision and Control.
[22] Kuan-Ting Yu,et al. More than a million ways to be pushed: A high-fidelity experimental dataset of planar pushing , 2016, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[23] Christopher Joseph Pal,et al. Active Domain Randomization , 2019, CoRL.
[24] Marcin Andrychowicz,et al. Sim-to-Real Transfer of Robotic Control with Dynamics Randomization , 2017, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[25] Ruslan Salakhutdinov,et al. Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers , 2020, ArXiv.
[26] Sicun Gao,et al. Provably Efficient Model-based Policy Adaptation , 2020, ICML.
[27] Peter Stone,et al. Stochastic Grounded Action Transformation for Robot Learning in Simulation , 2017, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[28] Byron Boots,et al. Simulation-based design of dynamic controllers for humanoid balancing , 2016, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[29] Sehoon Ha,et al. Learning Fast Adaptation With Meta Strategy Optimization , 2020, IEEE Robotics and Automation Letters.
[30] Jun-Yan Zhu,et al. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks , 2017, ICCV.
[31] Balaraman Ravindran,et al. EPOpt: Learning Robust Neural Network Policies Using Model Ensembles , 2016, ICLR.
[32] Timothy M. Hospedales,et al. Adversarial Generation of Informative Trajectories for Dynamics System Identification , 2020, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[33] Yevgen Chebotar,et al. Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience , 2018, 2019 International Conference on Robotics and Automation (ICRA).
[34] Sergey Levine,et al. Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping , 2018, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[35] Manmohan Krishna Chandraker,et al. Learning To Simulate , 2018, ICLR.
[36] Mohi Khansari,et al. RL-CycleGAN: Reinforcement Learning Aware Simulation-to-Real , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[37] Atil Iscen,et al. Sim-to-Real: Learning Agile Locomotion For Quadruped Robots , 2018, Robotics: Science and Systems.
[38] Patrick MacAlpine,et al. Humanoid robots learning to walk faster: from the real world to simulation and back , 2013, AAMAS.
[39] Wojciech Zaremba,et al. Domain randomization for transferring deep neural networks from simulation to the real world , 2017, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[40] Sergey Levine,et al. Learning to Adapt in Dynamic, Real-World Environments through Meta-Reinforcement Learning , 2018, ICLR.
[41] Atil Iscen,et al. Data Efficient Reinforcement Learning for Legged Robots , 2019, CoRL.