Objective learning from human demonstrations
[1] Peter A. Beling, et al. Inverse reinforcement learning with Gaussian process, 2011, Proceedings of the 2011 American Control Conference.
[2] Timothy Bretl, et al. Inverse optimal control for differentially flat systems with application to locomotion modeling, 2014, 2014 IEEE International Conference on Robotics and Automation (ICRA).
[3] Katja D. Mombaur, et al. Inverse optimal control based identification of optimality criteria in whole-body human walking on level ground, 2016, 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob).
[4] Stefano Ermon, et al. Generative Adversarial Imitation Learning, 2016, NIPS.
[5] Dana Kulic, et al. Anthropomorphic Movement Analysis and Synthesis: A Survey of Methods and Applications, 2016, IEEE Transactions on Robotics.
[6] Melanie N. Zeilinger, et al. Predictive Modeling by Infinite-Horizon Constrained Inverse Optimal Control with Application to a Human Manipulation Task, 2018, ArXiv.
[7] Dana H. Ballard, et al. Modular inverse reinforcement learning for visuomotor behavior, 2013, Biological Cybernetics.
[8] Christian Vollmer, et al. Learning to navigate through crowded environments, 2010, 2010 IEEE International Conference on Robotics and Automation.
[9] Henk Nijmeijer, et al. Robot Programming by Demonstration, 2010, SIMPAR.
[10] Pravesh Ranchod, et al. Learning Options from Demonstration using Skill Segmentation, 2020, 2020 International SAUPEC/RobMech/PRASA Conference.
[11] Kai Oliver Arras, et al. Learning socially normative robot navigation behaviors with Bayesian inverse reinforcement learning, 2016, 2016 IEEE International Conference on Robotics and Automation (ICRA).
[12] Matthieu Geist, et al. Inverse Reinforcement Learning through Structured Classification, 2012, NIPS.
[13] Xiaodong Li, et al. Learning a Super Mario controller from examples of human play, 2014, 2014 IEEE Congress on Evolutionary Computation (CEC).
[14] Joelle Pineau, et al. Socially Adaptive Path Planning in Human Environments Using Inverse Reinforcement Learning, 2016, Int. J. Soc. Robotics.
[15] Peter Englert, et al. Inverse KKT: Learning Cost Functions of Manipulation Tasks from Demonstrations, 2017, ISRR.
[16] M. Latash, et al. Finger Coordination Under Artificial Changes in Finger Strength Feedback: A Study Using Analytical Inverse Optimization, 2011, Journal of Motor Behavior.
[17] Gianni Ferretti, et al. Generation of human walking paths, 2013, 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[18] Jean-Paul Laumond, et al. From human to humanoid locomotion: an inverse optimal control approach, 2010, Auton. Robots.
[19] Bernhard Schölkopf, et al. Learning strategies in table tennis using inverse reinforcement learning, 2014, Biological Cybernetics.
[20] Peter Englert, et al. Learning manipulation skills from a single demonstration, 2018, Int. J. Robotics Res.
[21] Wolfram Burgard, et al. Socially compliant mobile robot navigation via inverse reinforcement learning, 2016, Int. J. Robotics Res.
[22] Timothy Bretl, et al. Inverse optimal control for deterministic continuous-time nonlinear systems, 2013, 52nd IEEE Conference on Decision and Control.
[23] Christos Dimitrakakis, et al. Preference elicitation and inverse reinforcement learning, 2011, ECML/PKDD.
[24] Christos Dimitrakakis, et al. Bayesian Multitask Inverse Reinforcement Learning, 2011, EWRL.
[25] Marc Toussaint, et al. Direct Loss Minimization Inverse Optimal Control, 2015, Robotics: Science and Systems.
[26] E. Todorov, et al. Inverse Optimality Design for Biological Movement Systems, 2011.
[27] Stefan Schaal, et al. A Generalized Path Integral Control Approach to Reinforcement Learning, 2010, J. Mach. Learn. Res.
[28] Mohamed Medhat Gaber, et al. Imitation Learning, 2017, ACM Comput. Surv.
[29] Matthieu Geist, et al. A Cascaded Supervised Learning Approach to Inverse Reinforcement Learning, 2013, ECML/PKDD.
[30] Han-Pang Huang, et al. A mobile robot that understands pedestrian spatial behaviors, 2010, 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[31] Brett Browning, et al. A survey of robot learning from demonstration, 2009, Robotics Auton. Syst.
[32] Vincent Bonnet, et al. Human Arm Motion Analysis Based on the Inverse Optimization Approach, 2018, 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob).
[33] Stephen P. Boyd, et al. Imputing a convex objective function, 2011, 2011 IEEE International Symposium on Intelligent Control.
[34] Abdelkader El Kamel, et al. Neural inverse reinforcement learning in autonomous navigation, 2016, Robotics Auton. Syst.
[35] Dmitry Berenson, et al. Learning Constraints From Locally-Optimal Demonstrations Under Cost Function Uncertainty, 2020, IEEE Robotics and Automation Letters.
[36] Sergey Levine, et al. Nonlinear Inverse Reinforcement Learning with Gaussian Processes, 2011, NIPS.
[37] Vladimir M. Zatsiorsky, et al. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods, 2011, Biological Cybernetics.
[38] Martial Hebert, et al. Activity Forecasting, 2012, ECCV.
[39] Katja D. Mombaur, et al. Humanoid gait generation in complex environments based on template models and optimality principles learned from human beings, 2018, Int. J. Robotics Res.
[40] Stefan Schaal, et al. A Robustness Analysis of Inverse Optimal Control of Bipedal Walking, 2019, IEEE Robotics and Automation Letters.
[41] Kee-Eung Kim, et al. Inverse Reinforcement Learning in Partially Observable Environments, 2009, IJCAI.
[42] Jonathan Feng-Shun Lin, et al. Inverse optimal control with time-varying objectives: application to human jumping movement analysis, 2020, Scientific Reports.
[43] Johannes P. Schlöder, et al. Estimating Parameters in Optimal Control Problems, 2012, SIAM J. Sci. Comput.
[44] Stefan Schaal, et al. Learning from Demonstration, 1996, NIPS.
[45] Jonathan P. How, et al. Bayesian Nonparametric Inverse Reinforcement Learning, 2012, ECML/PKDD.
[46] Tristan Perez, et al. Finite-horizon inverse optimal control for discrete-time nonlinear systems, 2018, Autom.
[47] Dana Kulic, et al. Human motion segmentation using cost weights recovered from inverse optimal control, 2016, 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids).
[48] B. Anderson, et al. Nonlinear regulator theory and an inverse optimal control problem, 1973.
[49] J. Andrew Bagnell, et al. Maximum margin planning, 2006, ICML.
[50] Brahim Chaib-draa, et al. Bootstrapping Apprenticeship Learning, 2010, NIPS.
[51] Melanie N. Zeilinger, et al. Convex Formulations and Algebraic Solutions for Linear Quadratic Inverse Optimal Control Problems, 2018, 2018 European Control Conference (ECC).
[52] Hannes Sommer, et al. Predicting actions to act predictably: Cooperative partial motion planning with maximum entropy models, 2016, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[53] Aude Billard, et al. An inverse optimization approach to understand human acquisition of kinematic coordination in bimanual fine manipulation tasks, 2020, Biological Cybernetics.
[54] Han Zhang, et al. Inverse Optimal Control for Finite-Horizon Discrete-Time Linear Quadratic Regulator Under Noisy Output, 2019, 2019 IEEE 58th Conference on Decision and Control (CDC).
[55] Oliver Kroemer, et al. Structured Apprenticeship Learning, 2012, ECML/PKDD.
[56] Gentiane Venture, et al. Human arm optimal motion analysis in industrial screwing task, 2014, 5th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics.
[57] Rohan R. Paleja, et al. Joint Goal and Strategy Inference across Heterogeneous Demonstrators via Reward Network Distillation, 2020, 2020 15th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
[58] Zoran Popović, et al. Learning behavior styles with inverse reinforcement learning, 2010, SIGGRAPH.
[59] Sylvain Miossec, et al. Gait analysis using optimality criteria imputed from human data, 2017.
[60] Fuchun Sun, et al. Survey of imitation learning for robotic manipulation, 2019, International Journal of Intelligent Robotics and Applications.
[61] Kee-Eung Kim, et al. MAP Inference for Bayesian Inverse Reinforcement Learning, 2011, NIPS.
[62] Pieter Abbeel, et al. Apprenticeship learning via inverse reinforcement learning, 2004, ICML.
[63] Jongeun Choi, et al. Solutions to the Inverse LQR Problem With Application to Biological Systems Analysis, 2015, IEEE Transactions on Control Systems Technology.
[64] Sebastian Thrun, et al. Apprenticeship learning for motion planning with application to parking lot navigation, 2008, 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[65] Katja Mombaur, et al. Forward and Inverse Optimal Control of Bipedal Running, 2013.
[66] J. Betts. Survey of Numerical Methods for Trajectory Optimization, 1998.
[67] Scott Niekum, et al. Machine Teaching for Inverse Reinforcement Learning: Algorithms and Applications, 2018, AAAI.
[68] Jee-Hwan Ryu, et al. Inverse discounted-based LQR algorithm for learning human movement behaviors, 2018, Applied Intelligence.
[69] Katja Mombaur, et al. On the Relevance of Common Humanoid Gait Generation Strategies in Human Locomotion: An Inverse Optimal Control Approach, 2017.
[70] Midhun P. Unni, et al. Neuromechanical Cost Functionals Governing Motor Control for Early Screening of Motor Disorders, 2017, Front. Bioeng. Biotechnol.
[71] Sergey Levine, et al. Feature Construction for Inverse Reinforcement Learning, 2010, NIPS.
[72] Siddhartha S. Srinivasa, et al. Planning-based prediction for pedestrians, 2009, 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[73] Csaba Szepesvári, et al. Training parsers by inverse reinforcement learning, 2009, Machine Learning.
[74] Siddhartha S. Srinivasa, et al. Imitation learning for locomotion and manipulation, 2007, 2007 7th IEEE-RAS International Conference on Humanoid Robots.
[75] Aude Billard, et al. Donut as I do: Learning from failed demonstrations, 2011, 2011 IEEE International Conference on Robotics and Automation.
[76] Pieter Abbeel, et al. Autonomous Helicopter Aerobatics through Apprenticeship Learning, 2010, Int. J. Robotics Res.
[77] Jonathan P. How, et al. Scalable reward learning from demonstration, 2013, 2013 IEEE International Conference on Robotics and Automation.
[78] Mark L. Latash, et al. An analytical approach to the problem of inverse optimization with additive objective functions: an application to human prehension, 2010, Journal of Mathematical Biology.
[79] Pieter Abbeel, et al. Apprenticeship learning for helicopter control, 2009, CACM.
[80] Gentiane Venture, et al. Analysis of Affective Human Motion During Functional Task Performance: An Inverse Optimal Control Approach, 2019, 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids).
[81] Kee-Eung Kim, et al. Nonparametric Bayesian Inverse Reinforcement Learning for Multiple Reward Functions, 2012, NIPS.
[82] Mohsen Davoudi, et al. From inverse optimal control to inverse reinforcement learning: A historical review, 2020, Annu. Rev. Control.
[83] C. K. Liu, et al. Learning physics-based motion style with nonlinear inverse optimization, 2005, SIGGRAPH.
[84] Sonia Chernova, et al. Recent Advances in Robot Learning from Demonstration, 2020, Annu. Rev. Control. Robotics Auton. Syst.
[85] Michael H. Bowling, et al. Apprenticeship learning using linear programming, 2008, ICML.
[86] Hui Qian, et al. Convergence analysis of an incremental approach to online inverse reinforcement learning, 2011, Journal of Zhejiang University SCIENCE C.
[87] Jan Peters, et al. Reinforcement learning in robotics: A survey, 2013, Int. J. Robotics Res.
[88] Gergely V. Záruba, et al. Inverse reinforcement learning for decentralized non-cooperative multiagent systems, 2012, 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC).
[89] Miaoliang Zhu, et al. Modified reward function on abstract features in inverse reinforcement learning, 2010, Journal of Zhejiang University SCIENCE C.
[90] Timothy Bretl, et al. A convex approach to inverse optimal control and its application to modeling human locomotion, 2012, 2012 IEEE International Conference on Robotics and Automation.
[91] Zhen Kan, et al. Skill transfer learning for autonomous robots and human-robot cooperation: A survey, 2020, Robotics Auton. Syst.
[92] Dirk Wollherr, et al. An Inverse Optimal Control Approach to Explain Human Arm Reaching Control Based on Multiple Internal Models, 2018, Scientific Reports.
[93] Said M. Megahed, et al. Adaptive learning of human motor behaviors: An evolving inverse optimal control approach, 2016, Eng. Appl. Artif. Intell.
[94] Er Meng Joo, et al. A survey of inverse reinforcement learning techniques, 2012.
[95] Michael Ulbrich, et al. A bilevel optimization approach to obtain optimal cost functions for human arm movements, 2012.
[96] Marco Pavone, et al. Risk-sensitive Inverse Reinforcement Learning via Coherent Risk Models, 2017, Robotics: Science and Systems.
[97] Pedro U. Lima, et al. Inverse reinforcement learning with evaluation, 2006, Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA).
[98] Katja Mombaur, et al. Inverse Optimal Control as a Tool to Understand Human Movement, 2017, Geometric and Numerical Foundations of Movements.
[99] Hsien-I Lin, et al. Active intention inference for robot-human collaboration, 2017.
[100] David Silver, et al. Learning from Demonstration for Autonomous Navigation in Complex Unstructured Terrain, 2010, Int. J. Robotics Res.
[101] Kian Hsiang Low, et al. Inverse Reinforcement Learning with Locally Consistent Reward Functions, 2015, NIPS.
[102] Francesco Nori, et al. Evidence for Composite Cost Functions in Arm Movement Planning: An Inverse Optimal Control Approach, 2011, PLoS Comput. Biol.
[103] Timothy Bretl, et al. Maximum entropy inverse reinforcement learning in continuous state spaces with path integrals, 2011, 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems.
[104] Dana Kulic, et al. Inverse Optimal Control for Multiphase Cost Functions, 2019, IEEE Transactions on Robotics.
[105] Scott Niekum, et al. Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations, 2019, CoRL.