Joseph J. Lim | Zhengyu Yang | Youngwoon Lee | Edward S. Hu | Alex Yin
[1] Silvio Savarese, et al. Neural Task Programming: Learning to Generalize Across Hierarchical Tasks, 2017, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[2] Yuval Tassa, et al. MuJoCo: A physics engine for model-based control, 2012, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[3] Silvio Savarese, et al. ROBOTURK: A Crowdsourcing Platform for Robotic Skill Learning through Imitation, 2018, CoRL.
[4] Nando de Freitas, et al. Reinforcement and Imitation Learning for Diverse Visuomotor Skills, 2018, Robotics: Science and Systems.
[5] Pieter Abbeel, et al. DoorGym: A Scalable Door Opening Environment And Baseline Agent, 2019, ArXiv.
[6] Hyeonwoo Noh, et al. Neural Program Synthesis from Diverse Demonstration Videos, 2018, ICML.
[7] Yichen Wei, et al. Relation Networks for Object Detection, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[8] Dan Klein, et al. Modular Multitask Reinforcement Learning with Policy Sketches, 2016, ICML.
[9] Antonio Torralba, et al. Parsing IKEA Objects: Fine Pose Estimation, 2013, 2013 IEEE International Conference on Computer Vision.
[10] Joseph J. Lim, et al. Composing Complex Skills by Learning Transition Policies, 2018, ICLR.
[11] Razvan Pascanu, et al. Deep reinforcement learning with relational inductive biases, 2018, ICLR.
[12] Yuval Tassa, et al. DeepMind Control Suite, 2018, ArXiv.
[13] Sergey Levine, et al. CAD2RL: Real Single-Image Flight without a Single Real Image, 2016, Robotics: Science and Systems.
[14] Sergey Levine, et al. Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning, 2019, CoRL.
[15] Wojciech Zaremba, et al. Domain randomization for transferring deep neural networks from simulation to the real world, 2017, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[16] Marcin Andrychowicz, et al. One-Shot Imitation Learning, 2017, NIPS.
[17] Marc G. Bellemare, et al. The Arcade Learning Environment: An Evaluation Platform for General Agents, 2012, J. Artif. Intell. Res.
[18] Ashish Kapoor, et al. AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles, 2017, FSR.
[19] Joseph J. Lim, et al. To Follow or not to Follow: Selective Imitation Learning from Observations, 2019, CoRL.
[20] Andrew J. Davison, et al. RLBench: The Robot Learning Benchmark & Learning Environment, 2019, IEEE Robotics and Automation Letters.
[21] Ruslan Salakhutdinov, et al. Gated-Attention Architectures for Task-Oriented Language Grounding, 2017, AAAI.
[22] Matthew Botvinick, et al. MONet: Unsupervised Scene Decomposition and Representation, 2019, ArXiv.
[23] Balaraman Ravindran, et al. EPOpt: Learning Robust Neural Network Policies Using Model Ensembles, 2016, ICLR.
[24] Germán Ros, et al. CARLA: An Open Urban Driving Simulator, 2017, CoRL.
[25] OpenAI. Learning Dexterous In-Hand Manipulation, 2018.
[26] Ali Farhadi, et al. AI2-THOR: An Interactive 3D Environment for Visual AI, 2017, ArXiv.
[27] Tom Schaul, et al. StarCraft II: A New Challenge for Reinforcement Learning, 2017, ArXiv.
[28] Doina Precup, et al. Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning, 1999, Artif. Intell.
[29] Marwan Mattar, et al. Unity: A General Platform for Intelligent Agents, 2018, ArXiv.
[30] Sergey Levine, et al. Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations, 2017, Robotics: Science and Systems.
[31] Jürgen Schmidhuber, et al. Neural Expectation Maximization, 2017, NIPS.
[32] Sergey Levine, et al. Sim2Real View Invariant Visual Servoing by Recurrent Control, 2017, ArXiv.
[33] Siddhartha S. Srinivasa, et al. DART: Dynamic Animation and Robotics Toolkit, 2018, J. Open Source Softw.
[34] Yevgen Chebotar, et al. Closing the Sim-to-Real Loop: Adapting Simulation Randomization with Real World Experience, 2018, 2019 International Conference on Robotics and Automation (ICRA).
[35] Anoop Cherian, et al. Human Action Forecasting by Learning Task Grammars, 2017, ArXiv.
[36] Richard S. Sutton, et al. Dyna, an integrated architecture for learning, planning, and reacting, 1990, ACM SIGART Bulletin.
[37] Alexandros Karatzoglou, et al. RecoGym: A Reinforcement Learning Environment for the problem of Product Recommendation in Online Advertising, 2018, ArXiv.
[38] Sanja Fidler, et al. VirtualHome: Simulating Household Activities Via Programs, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[39] Inman Harvey, et al. Noise and the Reality Gap: The Use of Simulation in Evolutionary Robotics, 1995, ECAL.
[40] Erwin Coumans, et al. Bullet physics simulation, 2015, SIGGRAPH Courses.
[41] Wojciech Zaremba, et al. OpenAI Gym, 2016, ArXiv.
[42] Sergey Levine, et al. Divide-and-Conquer Reinforcement Learning, 2017, ICLR.
[43] Silvio Savarese, et al. SURREAL: Open-Source Reinforcement Learning Framework and Robot Manipulation Benchmark, 2018, CoRL.
[44] Honglak Lee, et al. Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning, 2017, ICML.
[45] Jakub W. Pachocki, et al. Learning dexterous in-hand manipulation, 2018, Int. J. Robotics Res.
[46] Marcin Andrychowicz, et al. Hindsight Experience Replay, 2017, NIPS.
[47] Wojciech Jaśkowski, et al. ViZDoom: A Doom-based AI research platform for visual reinforcement learning, 2016, 2016 IEEE Conference on Computational Intelligence and Games (CIG).