Alois Knoll | Kai Huang | Zhenshan Bing | Fabrice O. Morin | Matthias Brucker | Rui Li | Xiaojie Su
[1] Pierre-Yves Oudeyer,et al. Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration , 2018, ICLR.
[2] Ran Duan,et al. A scaling algorithm for maximum weight matching in bipartite graphs , 2012, SODA.
[3] Salima Hassas,et al. A survey on intrinsic motivation in reinforcement learning , 2019, ArXiv.
[4] Edsger W. Dijkstra,et al. A note on two problems in connexion with graphs , 1959, Numerische Mathematik.
[5] Sergey Levine,et al. Learning Actionable Representations with Goal-Conditioned Policies , 2018, ICLR.
[6] Marcin Andrychowicz,et al. Hindsight Experience Replay , 2017, NIPS.
[7] Sergey Levine,et al. Recall Traces: Backtracking Models for Efficient Reinforcement Learning , 2018, ICLR.
[8] Nuttapong Chentanez,et al. Intrinsically Motivated Reinforcement Learning , 2004, NIPS.
[9] Pieter Abbeel,et al. Reverse Curriculum Generation for Reinforcement Learning , 2017, CoRL.
[10] Ludger Rüschendorf. The Wasserstein distance and approximation theorems , 1985 .
[11] Pierre-Yves Oudeyer,et al. Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning , 2017, J. Mach. Learn. Res..
[12] Lei Han,et al. Curriculum-guided Hindsight Experience Replay , 2019, NeurIPS.
[13] Ludger Rüschendorf,et al. The Wasserstein distance and approximation theorems , 1985 .
[14] Pierre-Yves Oudeyer,et al. CURIOUS: Intrinsically Motivated Modular Multi-Goal Reinforcement Learning , 2018, ICML.
[15] Tom Schaul,et al. Prioritized Experience Replay , 2015, ICLR.
[16] Ilya Kostrikov,et al. Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play , 2017, ICLR.
[17] Sergey Levine,et al. Path integral guided policy search , 2016, 2017 IEEE International Conference on Robotics and Automation (ICRA).
[18] Pascal Vincent,et al. Representation Learning: A Review and New Perspectives , 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[19] Martin A. Riedmiller,et al. Learning by Playing - Solving Sparse Reward Tasks from Scratch , 2018, ICML.
[20] Stefan Schaal,et al. 2008 Special Issue: Reinforcement learning of motor skills with policy gradients , 2008 .
[21] Allan Jabri,et al. Universal Planning Networks: Learning Generalizable Representations for Visuomotor Control , 2018, ICML.
[22] Volker Tresp,et al. Curiosity-Driven Experience Prioritization via Density Estimation , 2018, ArXiv.
[23] Volker Tresp,et al. Energy-Based Hindsight Experience Prioritization , 2018, CoRL.
[24] Marcin Andrychowicz,et al. Multi-Goal Reinforcement Learning: Challenging Robotics Environments and Request for Research , 2018, ArXiv.
[25] Pieter Abbeel,et al. Automatic Goal Generation for Reinforcement Learning Agents , 2017, ICML.
[26] Sergey Levine,et al. Search on the Replay Buffer: Bridging Planning and Reinforcement Learning , 2019, NeurIPS.
[27] Rui Zhao,et al. Maximum Entropy-Regularized Multi-Goal Reinforcement Learning , 2019, ICML.
[28] Ben Tse,et al. Autonomous Inverted Helicopter Flight via Reinforcement Learning , 2004, ISER.
[29] Yuandong Tian,et al. Algorithmic Framework for Model-based Deep Reinforcement Learning with Theoretical Guarantees , 2018, ICLR.
[30] Stefan Wermter,et al. Curriculum goal masking for continuous deep reinforcement learning , 2018, 2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob).
[31] Kavosh Asadi,et al. Lipschitz Continuity in Model-based Reinforcement Learning , 2018, ICML.
[32] Yuan Zhou,et al. Exploration via Hindsight Goal Generation , 2019, NeurIPS.
[33] Sergey Levine,et al. End-to-End Training of Deep Visuomotor Policies , 2015, J. Mach. Learn. Res..
[34] Alexei A. Efros,et al. Curiosity-Driven Exploration by Self-Supervised Prediction , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).