Learning Quadcopter Maneuvers with Concurrent Methods of Policy Optimization