[1] Jan Peters, et al. Learning motor primitives for robotics, 2009, 2009 IEEE International Conference on Robotics and Automation.
[2] Andrea d'Avella, et al. Learned parametrized dynamic movement primitives with shared synergies for controlling robotic and musculoskeletal systems, 2013, Front. Comput. Neurosci.
[3] Sergey Levine, et al. Divide-and-Conquer Reinforcement Learning, 2017, ICLR.
[4] Xinyu Liu, et al. Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics, 2017, Robotics: Science and Systems.
[5] Jan Peters, et al. Reinforcement learning vs human programming in tetherball robot games, 2015, 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[6] Sergey Levine, et al. Guided Policy Search, 2013, ICML.
[7] Doina Precup, et al. Learning Options in Reinforcement Learning, 2002, SARA.
[8] Jun Morimoto, et al. Deep Encoder-Decoder Networks for Mapping Raw Images to Dynamic Movement Primitives, 2018, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[9] Yee Whye Teh, et al. Distral: Robust multitask reinforcement learning, 2017, NIPS.
[10] Jan Peters, et al. Hierarchical Relative Entropy Policy Search, 2014, AISTATS.
[11] Jitendra Malik, et al. Learning to Poke by Poking: Experiential Learning of Intuitive Physics, 2016, NIPS.
[12] Ron Sun, et al. From implicit skills to explicit knowledge: a bottom-up model of skill learning, 2001, Cogn. Sci.
[13] Silvio Savarese, et al. Variable Impedance Control in End-Effector Space: An Action Space for Reinforcement Learning in Contact-Rich Tasks, 2019, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[14] Stefan Schaal, et al. Reinforcement Learning With Sequences of Motion Primitives for Robust Manipulation, 2012, IEEE Transactions on Robotics.
[15] Oliver Kroemer, et al. Learning to select and generalize striking movements in robot table tennis, 2012, AAAI Fall Symposium: Robots Learning Interactively from Human Teachers.
[16] Jun Nakanishi, et al. Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors, 2013, Neural Computation.
[17] Stefan Schaal, et al. Skill learning and task outcome prediction for manipulation, 2011, 2011 IEEE International Conference on Robotics and Automation.
[18] Sergey Levine, et al. QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation, 2018, CoRL.
[19] Mohit Sharma, et al. A Modular Robotic Arm Control Stack for Research: Franka-Interface and FrankaPy, 2020, ArXiv.
[20] Yuval Tassa, et al. MuJoCo: A physics engine for model-based control, 2012, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[21] Sergey Levine, et al. Learning Hand-Eye Coordination for Robotic Grasping with Large-Scale Data Collection, 2016, ISER.
[22] Razvan Pascanu, et al. Policy Distillation, 2015, ICLR.
[23] Sylvain Calinon. A tutorial on task-parameterized movement learning and retrieval, 2016.
[24] Sylvain Calinon, et al. A tutorial on task-parameterized movement learning and retrieval, 2015, Intelligent Service Robotics.
[25] Abhinav Gupta, et al. Neural Dynamic Policies for End-to-End Sensorimotor Learning, 2020, NeurIPS.
[26] Darwin G. Caldwell, et al. Kernelized movement primitives, 2017, Int. J. Robotics Res.
[27] Maximilian Karl, et al. Dynamic movement primitives in latent space of time-dependent variational autoencoders, 2016, 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids).
[28] Sergey Levine, et al. Data-Efficient Hierarchical Reinforcement Learning, 2018, NeurIPS.
[29] Darwin G. Caldwell, et al. Robot motor skill coordination with EM-based Reinforcement Learning, 2010, 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[30] S. Schaal. Dynamic Movement Primitives - A Framework for Motor Control in Humans and Humanoid Robotics, 2006.
[31] Darwin G. Caldwell, et al. Learning-based control strategy for safe human-robot interaction exploiting task and robot redundancies, 2010, 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[32] Stefan Schaal, et al. Learning and generalization of motor skills by learning from demonstration, 2009, 2009 IEEE International Conference on Robotics and Automation.
[33] Tom Schaul, et al. FeUdal Networks for Hierarchical Reinforcement Learning, 2017, ICML.
[34] Abhinav Gupta, et al. Supersizing self-supervision: Learning to grasp from 50K tries and 700 robot hours, 2015, 2016 IEEE International Conference on Robotics and Automation (ICRA).
[35] Jun Morimoto, et al. Task-Specific Generalization of Discrete and Periodic Dynamic Movement Primitives, 2010, IEEE Transactions on Robotics.
[36] Daniel Kappler, et al. Riemannian Motion Policies, 2018, ArXiv.
[37] Rich Caruana, et al. Model compression, 2006, KDD '06.
[38] Byron Boots, et al. RMPflow: A Computational Graph for Automatic Motion Policy Generation, 2018, WAFR.
[39] Satoshi Endo, et al. Dynamic Movement Primitives for Human-Robot interaction: Comparison with human behavioral observation, 2013, 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[40] Jan Peters, et al. Reinforcement Learning to Adjust Robot Movements to New Situations, 2010, IJCAI.
[41] Sergey Levine, et al. End-to-End Training of Deep Visuomotor Policies, 2015, J. Mach. Learn. Res.
[42] Alec Radford, et al. Proximal Policy Optimization Algorithms, 2017, ArXiv.
[43] Tucker Hermans, et al. Active Learning of Probabilistic Movement Primitives, 2019, 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids).