Representation Matters: Improving Perception and Exploration for Robotics
Martin Riedmiller, Markus Wulfmeier, Roland Hafner, Tim Hertweck, Irina Higgins, Arunkumar Byravan, Malcolm Reynolds, Denis Teplyashin, Tejas Kulkarni, Thomas Lampe, Ankush Gupta