Sergey Levine | Karol Hausman | Zhanpeng He | Chelsea Finn | Tianhe Yu | Deirdre Quillen | Ryan Julian
[1] Siddhartha S. Srinivasa, et al. The YCB object and Model set: Towards common benchmarks for manipulation research, 2015, 2015 International Conference on Advanced Robotics (ICAR).
[2] Wei Gao, et al. kPAM: KeyPoint Affordances for Category-Level Robotic Manipulation, 2019, ISRR.
[3] Sergey Levine, et al. Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables, 2019, ICML.
[4] Jakub W. Pachocki, et al. Learning dexterous in-hand manipulation, 2018, Int. J. Robotics Res.
[5] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[6] Ali Farhadi, et al. AI2-THOR: An Interactive 3D Environment for Visual AI, 2017, ArXiv.
[7] Ruslan Salakhutdinov, et al. Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning, 2015, ICLR.
[8] Kuan-Ting Yu, et al. More than a million ways to be pushed. A high-fidelity experimental dataset of planar pushing, 2016, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[9] Silvio Savarese, et al. ROBOTURK: A Crowdsourcing Platform for Robotic Skill Learning through Imitation, 2018, CoRL.
[10] Pieter Abbeel, et al. Meta-Learning with Temporal Convolutions, 2017, ArXiv.
[12] Katja Hofmann, et al. Meta Reinforcement Learning with Latent Variable Gaussian Processes, 2018, UAI.
[13] Simon Brodeur, et al. HoME: a Household Multimodal Environment, 2017, ICLR.
[14] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[15] Thomas A. Funkhouser, et al. MINOS: Multimodal Indoor Simulator for Navigation in Complex Environments, 2017, ArXiv.
[16] Balaraman Ravindran, et al. Online Multi-Task Learning Using Active Sampling, 2017, ICLR.
[17] Wojciech Zaremba, et al. OpenAI Gym, 2016, ArXiv.
[18] Sergey Levine, et al. Visual Reinforcement Learning with Imagined Goals, 2018, NeurIPS.
[19] Silvio Savarese, et al. SURREAL: Open-Source Reinforcement Learning Framework and Robot Manipulation Benchmark, 2018, CoRL.
[20] Gaurav S. Sukhatme, et al. Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning, 2017, ICML.
[21] Sergey Levine, et al. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, 2018, ICML.
[22] Sergey Levine, et al. Trust Region Policy Optimization, 2015, ICML.
[23] Shane Legg, et al. IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures, 2018, ICML.
[24] Sergey Levine, et al. End-to-End Training of Deep Visuomotor Policies, 2015, J. Mach. Learn. Res.
[25] John Schulman, et al. Gotta Learn Fast: A New Benchmark for Generalization in RL, 2018, ArXiv.
[26] Sergey Levine, et al. Unsupervised Learning for Physical Interaction through Video Prediction, 2016, NIPS.
[27] Charles C. Kemp, et al. A list of household objects for robotic retrieval prioritized by people with ALS, 2008, 2009 IEEE International Conference on Rehabilitation Robotics.
[28] Alec Radford, et al. Proximal Policy Optimization Algorithms, 2017, ArXiv.
[29] Honglak Lee, et al. Deep learning for detecting robotic grasps, 2013, Int. J. Robotics Res.
[30] Abhinav Gupta, et al. Robot Learning in Homes: Improving Generalization and Reducing Dataset Bias, 2018, NeurIPS.
[31] Jeannette Bohg, et al. Leveraging big data for grasp planning, 2015, 2015 IEEE International Conference on Robotics and Automation (ICRA).
[32] Jitendra Malik, et al. Gibson Env: Real-World Perception for Embodied Agents, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[33] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[34] Joseph J. Lim, et al. IKEA Furniture Assembly Environment for Long-Horizon Complex Manipulation Tasks, 2019, 2021 IEEE International Conference on Robotics and Automation (ICRA).
[35] Gaurav S. Sukhatme, et al. BiGS: BioTac Grasp Stability Dataset, 2016.
[36] Abhinav Gupta, et al. Multiple Interactions Made Easy (MIME): Large Scale Demonstrations Data for Imitation, 2018, CoRL.
[37] Peter L. Bartlett, et al. RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning, 2016, ArXiv.
[38] Germán Ros, et al. CARLA: An Open Urban Driving Simulator, 2017, CoRL.
[39] OpenAI. Learning Dexterous In-Hand Manipulation, 2018.
[40] Christos Dimitrakakis, et al. TORCS, The Open Racing Car Simulator, 2005.
[41] Jitendra Malik, et al. Habitat: A Platform for Embodied AI Research, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[42] Sergey Levine, et al. Learning to Adapt: Meta-Learning for Model-Based Control, 2018, ArXiv.
[43] Pieter Abbeel, et al. A Simple Neural Attentive Meta-Learner, 2017, ICLR.
[44] Rüdiger Dillmann, et al. The KIT object models database: An object model database for object recognition, localization and manipulation in service robotics, 2012, Int. J. Robotics Res.
[45] Taehoon Kim, et al. Quantifying Generalization in Reinforcement Learning, 2018, ICML.
[46] Yuval Tassa, et al. MuJoCo: A physics engine for model-based control, 2012, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[47] Razvan Pascanu, et al. Ray Interference: a Source of Plateaus in Deep Reinforcement Learning, 2019, ArXiv.
[48] Razvan Pascanu, et al. Policy Distillation, 2015, ICLR.
[49] Vladlen Koltun, et al. Playing for Benchmarks, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[50] Zeb Kurth-Nelson, et al. Been There, Done That: Meta-Learning with Episodic Recall, 2018, ICML.
[51] Oliver Brock, et al. Analysis and Observations From the First Amazon Picking Challenge, 2016, IEEE Transactions on Automation Science and Engineering.
[52] Zeb Kurth-Nelson, et al. Learning to reinforcement learn, 2016, CogSci.
[53] Sergey Levine, et al. Learning to Adapt in Dynamic, Real-World Environments through Meta-Reinforcement Learning, 2018, ICLR.
[54] Marlos C. Machado, et al. Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents, 2017, J. Artif. Intell. Res.
[55] Yuval Tassa, et al. DeepMind Control Suite, 2018, ArXiv.
[56] Oliver Kroemer, et al. Learning to select and generalize striking movements in robot table tennis, 2012, AAAI Fall Symposium: Robots Learning Interactively from Human Teachers.
[57] Matei T. Ciocarlie, et al. The Columbia grasp database, 2009, 2009 IEEE International Conference on Robotics and Automation.
[58] Andrew J. Davison, et al. RLBench: The Robot Learning Benchmark & Learning Environment, 2019, IEEE Robotics and Automation Letters.
[59] Karol Hausman, et al. Learning an Embedding Space for Transferable Robot Skills, 2018, ICLR.
[60] Sergey Levine, et al. Online Meta-Learning, 2019, ICML.
[61] Siddhartha S. Srinivasa, et al. Benchmarking in Manipulation Research: Using the Yale-CMU-Berkeley Object and Model Set, 2015, IEEE Robotics & Automation Magazine.
[63] Tamim Asfour, et al. ProMP: Proximal Meta-Policy Search, 2018, ICLR.
[64] Wojciech Czarnecki, et al. Multi-task Deep Reinforcement Learning with PopArt, 2018, AAAI.