[1] Wojciech Zaremba, et al. Domain Randomization and Generative Models for Robotic Grasping, 2017, 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[2] Jan Peters, et al. Local Gaussian process regression for real-time model-based robot control, 2008, 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[3] Stefan Schaal, et al. Reinforcement learning of motor skills with policy gradients, 2008, Neural Networks.
[4] Yevgen Chebotar, et al. Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience, 2018, 2019 International Conference on Robotics and Automation (ICRA).
[5] Fakhrul Alam, et al. Gaussian Process Model Predictive Control of unmanned quadrotors, 2016, 2016 2nd International Conference on Control, Automation and Robotics (ICCAR).
[6] Wojciech Zaremba, et al. OpenAI Gym, 2016, ArXiv.
[7] Marcin Andrychowicz, et al. Sim-to-Real Transfer of Robotic Control with Dynamics Randomization, 2017, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[8] J. Kocijan, et al. Gaussian process model based predictive control, 2004, Proceedings of the 2004 American Control Conference.
[9] Stephen A. Billings, et al. Non-linear system identification using neural networks, 1990.
[10] Yuval Tassa, et al. Continuous control with deep reinforcement learning, 2015, ICLR.
[11] Sergey Levine, et al. End-to-End Training of Deep Visuomotor Policies, 2015, J. Mach. Learn. Res..
[12] Srikanth Saripalli, et al. An Iterative LQR Controller for Off-Road and On-Road Vehicles using a Neural Network Dynamics Model, 2020, 2020 IEEE Intelligent Vehicles Symposium (IV).
[13] Jan Peters, et al. Reinforcement Learning to Adjust Robot Movements to New Situations, 2010, IJCAI.
[14] Alec Radford, et al. Proximal Policy Optimization Algorithms, 2017, ArXiv.
[15] Jan Peters, et al. Policy Search for Motor Primitives in Robotics, 2008, NIPS.
[16] Wojciech Zaremba, et al. Domain randomization for transferring deep neural networks from simulation to the real world, 2017, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[17] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[18] Richard D. Braatz, et al. On the "Identification and control of dynamical systems using neural networks", 1997, IEEE Trans. Neural Networks.
[19] Darwin G. Caldwell, et al. Robot motor skill coordination with EM-based Reinforcement Learning, 2010, 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[20] Marc Peter Deisenroth, et al. Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control, 2017, AISTATS.
[21] Dieter Fox, et al. Gaussian Processes and Reinforcement Learning for Identification and Control of an Autonomous Blimp, 2007, Proceedings 2007 IEEE International Conference on Robotics and Automation.
[22] Carl E. Rasmussen, et al. Gaussian Processes for Data-Efficient Learning in Robotics and Control, 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[23] Sergey Levine, et al. Learning to Walk via Deep Reinforcement Learning, 2018, Robotics: Science and Systems.
[24] Slobodan Ilic, et al. DeceptionNet: Network-Driven Domain Randomization, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[25] Shane Legg, et al. Human-level control through deep reinforcement learning, 2015, Nature.
[26] Sergey Levine, et al. CAD2RL: Real Single-Image Flight without a Single Real Image, 2016, Robotics: Science and Systems.
[27] Marcin Andrychowicz, et al. Solving Rubik's Cube with a Robot Hand, 2019, ArXiv.