Integrating contrastive learning with dynamic models for reinforcement learning from images
[2] Jan Peters,et al. Stable reinforcement learning with autoencoders for tactile and visual data , 2016, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[3] Jimmy Ba,et al. Dream to Control: Learning Behaviors by Latent Imagination , 2019, ICLR.
[4] Anind K. Dey,et al. Maximum Entropy Inverse Reinforcement Learning , 2008, AAAI.
[5] S. Levine,et al. Learning Invariant Representations for Reinforcement Learning without Reconstruction , 2020, ICLR.
[6] Meng Wei,et al. Robot skill acquisition in assembly process using deep reinforcement learning , 2019, Neurocomputing.
[7] Geoffrey E. Hinton,et al. Visualizing Data using t-SNE , 2008, J. Mach. Learn. Res..
[8] Alexei A. Efros,et al. Curiosity-Driven Exploration by Self-Supervised Prediction , 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[9] Alex Graves,et al. Playing Atari with Deep Reinforcement Learning , 2013, ArXiv.
[10] David Filliat,et al. Deep unsupervised state representation learning with robotic priors: a robustness analysis , 2019, 2019 International Joint Conference on Neural Networks (IJCNN).
[11] Yoshua Bengio,et al. Unsupervised State Representation Learning in Atari , 2019, NeurIPS.
[12] S. Varadhan,et al. Asymptotic evaluation of certain Markov process expectations for large time , 1975, Communications on Pure and Applied Mathematics.
[14] Honglak Lee,et al. Predictive Information Accelerates Learning in RL , 2020, NeurIPS.
[15] Sergey Levine,et al. EMI: Exploration with Mutual Information , 2018, ICML.
[16] R. Fergus,et al. Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels , 2020, ICLR.
[17] Pieter Abbeel,et al. CURL: Contrastive Unsupervised Representations for Reinforcement Learning , 2020, ICML.
[18] Ali Farhadi,et al. Target-driven visual navigation in indoor scenes using deep reinforcement learning , 2016, 2017 IEEE International Conference on Robotics and Automation (ICRA).
[19] Sergey Levine,et al. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor , 2018, ICML.
[20] Joelle Pineau,et al. Independently Controllable Features , 2017, ArXiv.
[21] Joelle Pineau,et al. Improving Sample Efficiency in Model-Free Reinforcement Learning from Images , 2019, AAAI.
[22] Hang Su,et al. Neural fuzzy approximation enhanced autonomous tracking control of the wheel-legged robot under uncertain physical interaction , 2020, Neurocomputing.
[23] Jitendra Malik,et al. Learning to Poke by Poking: Experiential Learning of Intuitive Physics , 2016, NIPS.
[24] Sergey Levine,et al. Deep visual foresight for planning robot motion , 2016, 2017 IEEE International Conference on Robotics and Automation (ICRA).
[25] Ross B. Girshick,et al. Momentum Contrast for Unsupervised Visual Representation Learning , 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[26] Oliver Brock,et al. State Representation Learning with Robotic Priors for Partially Observable Environments , 2019, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[27] Oriol Vinyals,et al. Representation Learning with Contrastive Predictive Coding , 2018, ArXiv.
[28] Sergey Levine,et al. End-to-End Training of Deep Visuomotor Policies , 2015, J. Mach. Learn. Res..
[29] Martin A. Riedmiller,et al. Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images , 2015, NIPS.
[30] Xin Zhang,et al. Random curiosity-driven exploration in deep reinforcement learning , 2020, Neurocomputing.
[31] Yoshua Bengio,et al. Mutual Information Neural Estimation , 2018, ICML.
[32] Sergey Levine,et al. Deep spatial autoencoders for visuomotor learning , 2015, 2016 IEEE International Conference on Robotics and Automation (ICRA).
[33] Martin A. Riedmiller,et al. Learn to Swing Up and Balance a Real Pole Based on Raw Visual Input Data , 2012, ICONIP.
[34] Oliver Brock,et al. Learning state representations with robotic priors , 2015, Auton. Robots.
[35] Ian S. Fischer,et al. The Conditional Entropy Bottleneck , 2020, Entropy.
[36] Aapo Hyvärinen,et al. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models , 2010, AISTATS.
[37] Martin A. Riedmiller,et al. PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations , 2017, ArXiv.
[38] Yoshua Bengio,et al. Learning deep representations by mutual information estimation and maximization , 2018, ICLR.
[40] Sergey Levine,et al. SOLAR: Deep Structured Representations for Model-Based Reinforcement Learning , 2018, ICML.
[41] Sergey Levine,et al. Unsupervised Learning for Physical Interaction through Video Prediction , 2016, NIPS.
[42] P. Cincotta,et al. Conditional Entropy , 1999 .
[43] Ruben Villegas,et al. Learning Latent Dynamics for Planning from Pixels , 2018, ICML.