dm_control: Software and Tasks for Continuous Control
Yuval Tassa | Steven Bohez | Tom Erez | Nicolas Heess | Timothy Lillicrap | Siqi Liu | Alistair Muldal | Saran Tunyasuvunakool | Yotam Doron | Josh Merel
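The paper describes dm_control, a set of Python libraries and task suites for continuous control built on the MuJoCo physics engine. As a minimal illustrative sketch (not an example taken from this page), the snippet below loads a Control Suite task and steps it with random actions; the cartpole/swingup task names and the random-action loop are assumptions chosen for brevity, assuming the dm_control and numpy packages are installed.

import numpy as np
from dm_control import suite

# Load one of the Control Suite domain/task pairs.
env = suite.load(domain_name="cartpole", task_name="swingup")
action_spec = env.action_spec()

# Run a single episode with uniformly random actions drawn from the action bounds.
time_step = env.reset()
while not time_step.last():
    action = np.random.uniform(low=action_spec.minimum,
                               high=action_spec.maximum,
                               size=action_spec.shape)
    time_step = env.step(action)

print("final reward:", time_step.reward)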
[1] Joseph J. Lim, et al. IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks, 2019, 2021 IEEE International Conference on Robotics and Automation (ICRA).
[2] Yuval Tassa, et al. Catch & Carry, 2020, ACM Trans. Graph.
[3] H. Francis Song, et al. A Distributional View on Multi-Objective Policy Optimization, 2020, ICML.
[4] Yuval Tassa, et al. Deep neuroethology of a virtual rodent, 2019, ICLR.
[5] Andrew J. Davison, et al. RLBench: The Robot Learning Benchmark & Learning Environment, 2019, IEEE Robotics and Automation Letters.
[6] S. Levine, et al. Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning, 2019, CoRL.
[7] Tom Eccles, et al. Reinforcement Learning Agents acquire Flocking and Symbiotic Behaviour in Simulated Ecosystems, 2019, Artificial Life Conference Proceedings.
[8] Guy Lever, et al. The Body is Not a Given: Joint Agent Policy Learning and Morphology Evolution, 2019, AAMAS.
[9] Guy Lever, et al. Emergent Coordination Through Competition, 2019, ICLR.
[10] Murilo F. Martins, et al. Simultaneously Learning Vision and Feature-based Control Policies for Real-world Ball-in-a-Cup, 2019, Robotics: Science and Systems.
[11] Yee Whye Teh, et al. Neural probabilistic motor primitives for humanoid control, 2018, ICLR.
[12] Nicolas Heess, et al. Hierarchical visuomotor control of humanoids, 2018, ICLR.
[13] Silvio Savarese, et al. SURREAL: Open-Source Reinforcement Learning Framework and Robot Manipulation Benchmark, 2018, CoRL.
[14] Raia Hadsell, et al. Success at any cost: value constrained model-free continuous control, 2018.
[15] Sergio Gomez Colmenarejo, et al. One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL, 2018, ArXiv.
[16] Martin A. Riedmiller, et al. Learning by Playing - Solving Sparse Reward Tasks from Scratch, 2018, ICML.
[17] Nando de Freitas, et al. Reinforcement and Imitation Learning for Diverse Visuomotor Skills, 2018, Robotics: Science and Systems.
[18] Yuval Tassa, et al. Maximum a Posteriori Policy Optimisation, 2018, ICLR.
[19] Misha Denil, et al. Learning Awareness Models, 2018, ICLR.
[20] Yuval Tassa, et al. DeepMind Control Suite, 2018, ArXiv.
[21] Philip Bachman, et al. Deep Reinforcement Learning that Matters, 2017, AAAI.
[22] Yuval Tassa, et al. Emergence of Locomotion Behaviours in Rich Environments, 2017, ArXiv.
[23] Yuval Tassa, et al. Learning human behaviors from motion capture by adversarial imitation, 2017, ArXiv.
[24] Alexandre Campeau-Lecours, et al. Kinova Modular Robot Arms for Service Robotics Applications, 2017, Int. J. Robotics Appl. Technol.
[25] Wojciech Zaremba, et al. OpenAI Gym, 2016, ArXiv.
[26] Pieter Abbeel, et al. Benchmarking Deep Reinforcement Learning for Continuous Control, 2016, ICML.
[27] Alex Graves, et al. Asynchronous Methods for Deep Reinforcement Learning, 2016, ICML.
[28] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[29] Yuval Tassa, et al. Continuous control with deep reinforcement learning, 2015, ICLR.
[30] Yuval Tassa, et al. Learning Continuous Control Policies by Stochastic Value Gradients, 2015, NIPS.
[31] Yuval Tassa, et al. Simulation tools for model-based robotics: Comparison of Bullet, Havok, MuJoCo, ODE and PhysX, 2015, 2015 IEEE International Conference on Robotics and Automation (ICRA).
[32] Shane Legg, et al. Human-level control through deep reinforcement learning, 2015, Nature.
[33] Sergey Levine, et al. Trust Region Policy Optimization, 2015, ICML.
[34] Marc G. Bellemare, et al. The Arcade Learning Environment: An Evaluation Platform for General Agents, 2012, J. Artif. Intell. Res.
[35] Yuval Tassa, et al. MuJoCo: A physics engine for model-based control, 2012, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[36] Yuval Tassa, et al. Synthesis and stabilization of complex behaviors through online trajectory optimization, 2012, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[37] Yuval Tassa, et al. Stochastic Complementarity for Local Control of Discontinuous Dynamics, 2010, Robotics: Science and Systems.
[38] Pawel Wawrzynski, et al. Real-time reinforcement learning by sequential Actor-Critics and experience replay, 2009, Neural Networks.
[39] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, IEEE Trans. Neural Networks.
[40] K. Doya, et al. A unifying computational framework for motor control and social interaction, 2003, Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences.
[41] Rémi Coulom, et al. Reinforcement Learning Using Neural Networks, with Applications to Motor Control, 2002.
[42] David K. Smith, et al. Dynamic Programming and Optimal Control. Volume 1, 1996.
[43] Dimitri P. Bertsekas, et al. Dynamic Programming and Optimal Control, Two Volume Set, 1995.
[44] Mark W. Spong, et al. The swing up control problem for the Acrobot, 1995.
[45] Karl Sims, et al. Evolving virtual creatures, 1994, SIGGRAPH.
[46] Richard S. Sutton, et al. Neuronlike adaptive elements that can solve difficult learning control problems, 1983, IEEE Transactions on Systems, Man, and Cybernetics.