Deep spatial autoencoders for visuomotor learning
Chelsea Finn | Xin Yu Tan | Yan Duan | Trevor Darrell | Sergey Levine | Pieter Abbeel
[1] Patrick Rives, et al. A new approach to visual servoing in robotics, 1992, IEEE Trans. Robotics Autom.
[2] Peter K. Allen, et al. Active, uncalibrated visual servoing, 1994, ICRA.
[3] Leemon C. Baird, et al. Residual Algorithms: Reinforcement Learning with Function Approximation, 1995, ICML.
[4] William J. Wilson, et al. Relative end-effector control using Cartesian position based visual servoing, 1996, IEEE Trans. Robotics Autom.
[5] Olac Fuentes, et al. Experimental evaluation of uncalibrated visual servoing for precision manipulation, 1997, ICRA.
[6] Jeff G. Schneider, et al. Covariant policy search, 2003, IJCAI.
[7] Peter Stone, et al. Learning Predictive State Representations, 2003, ICML.
[8] Peter Stone, et al. Policy gradient reinforcement learning for fast quadrupedal locomotion, 2004, ICRA.
[9] Emanuel Todorov, et al. Iterative Linear Quadratic Regulator Design for Nonlinear Biological Movement Systems, 2004, ICINCO.
[10] H. Sebastian Seung, et al. Stochastic policy gradient reinforcement learning on a simple 3D biped, 2004, IROS.
[11] Geoffrey E. Hinton, et al. Reinforcement Learning with Factored States and Actions, 2004, J. Mach. Learn. Res.
[12] Florentin Wörgötter, et al. Fast biped walking with a reflexive controller and real-time policy searching, 2005, NIPS.
[13] Honglak Lee, et al. Sparse deep belief net model for visual area V2, 2007, NIPS.
[14] Jun Morimoto, et al. Learning CPG-based Biped Locomotion with a Policy Gradient Method: Application to a Humanoid Robot, 2005, IEEE-RAS International Conference on Humanoid Robots.
[15] Stefan Schaal, et al. Reinforcement learning of motor skills with policy gradients, 2008, Neural Networks.
[16] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[17] Stefan Schaal, et al. Learning and generalization of motor skills by learning from demonstration, 2009, ICRA.
[18] Yasemin Altun, et al. Relative Entropy Policy Search, 2010, AAAI.
[19] Carl E. Rasmussen, et al. Learning to Control a Low-Cost Manipulator using Data-Efficient Reinforcement Learning, 2011, Robotics: Science and Systems.
[20] Jan Peters, et al. Reinforcement Learning to Adjust Robot Movements to New Situations, 2010, IJCAI.
[21] Byron Boots, et al. Closing the learning-planning loop with predictive state representations, 2011, Int. J. Robotics Res.
[22] J. Andrew Bagnell, et al. Agnostic System Identification for Model-Based Reinforcement Learning, 2012, ICML.
[23] Martin A. Riedmiller, et al. Autonomous reinforcement learning on raw visual input data in a real world application, 2012, IJCNN.
[24] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, NIPS.
[25] Alex Graves, et al. Playing Atari with Deep Reinforcement Learning, 2013, arXiv.
[26] Jan Peters, et al. A Survey on Policy Search for Robotics, 2013, Found. Trends Robotics.
[27] Martin A. Riedmiller, et al. Acquiring visual servoing reaching and grasping skills using neural reinforcement learning, 2013, IJCNN.
[28] Oliver Brock, et al. State Representation Learning in Robotics: Using Prior Knowledge about Physical Interaction, 2014, Robotics: Science and Systems.
[29] Sergey Levine, et al. Learning Neural Network Policies with Guided Policy Search under Unknown Dynamics, 2014, NIPS.
[30] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[31] Klaus Obermayer, et al. Autonomous Learning of State Representations for Control, 2015, KI - Künstliche Intelligenz.
[32] Martin A. Riedmiller, et al. Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images, 2015, NIPS.
[33] Yann LeCun, et al. Stacked What-Where Auto-encoders, 2015, arXiv.
[34] Nolan Wagener, et al. Learning contact-rich manipulation skills with guided policy search, 2015, ICRA.
[35] Yann LeCun, et al. Learning to Linearize Under Uncertainty, 2015, NIPS.
[36] Dumitru Erhan, et al. Going deeper with convolutions, 2015, CVPR.
[37] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[38] Sergey Levine, et al. End-to-End Training of Deep Visuomotor Policies, 2015, J. Mach. Learn. Res.