Sergey Levine | Nicholas Rhinehart | Rowan McAllister
[1] Geoffrey J. Gordon, et al. A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning, 2010, AISTATS.
[2] Stewart Worrall, et al. Naturalistic Driver Intention and Path Prediction Using Recurrent Neural Networks, 2018, IEEE Transactions on Intelligent Transportation Systems.
[3] Alexey Dosovitskiy, et al. End-to-End Driving Via Conditional Imitation Learning, 2017, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[4] Xin Zhang, et al. End to End Learning for Self-Driving Cars, 2016, ArXiv.
[5] Shigeki Sugano, et al. Rethinking Self-driving: Multi-task Knowledge for Better Generalization and Accident Explanation Ability, 2018, ArXiv.
[6] Vladlen Koltun, et al. Learning to Act by Predicting the Future, 2016, ICLR.
[7] Yuval Tassa, et al. Continuous control with deep reinforcement learning, 2015, ICLR.
[8] Ken Goldberg, et al. Deep Imitation Learning for Complex Manipulation Tasks from Virtual Reality Teleoperation, 2017, ICRA.
[9] Silvio Savarese, et al. Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[10] Carl E. Rasmussen, et al. PILCO: A Model-Based and Data-Efficient Approach to Policy Search, 2011, ICML.
[11] Sergey Levine, et al. Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review, 2018, ArXiv.
[12] Kenneth Y. Goldberg, et al. Learning Deep Policies for Robot Bin Picking by Simulating Robust Grasping Sequences, 2017, CoRL.
[13] Javier Alonso-Mora, et al. Planning and Decision-Making for Autonomous Vehicles, 2018, Annual Review of Control, Robotics, and Autonomous Systems.
[14] Dean Pomerleau. ALVINN: An Autonomous Land Vehicle in a Neural Network, 1988, NIPS.
[15] Kyunghyun Cho, et al. Query-Efficient Imitation Learning for End-to-End Simulated Driving, 2017, AAAI.
[16] Anca D. Dragan, et al. Planning for Autonomous Cars that Leverage Effects on Human Actions, 2016, Robotics: Science and Systems.
[17] Samy Bengio, et al. Density estimation using Real NVP, 2016, ICLR.
[18] Wei Zhan, et al. A Fast Integrated Planning and Control Framework for Autonomous Driving via Imitation Learning, 2017, ASME Dynamic Systems and Control Conference (DSCC).
[19] Nicholas Rhinehart, et al. First-Person Activity Forecasting with Online Inverse Reinforcement Learning, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[20] Peter Englert, et al. Probabilistic model-based imitation learning, 2013, Adaptive Behavior.
[21] Andreas Geiger, et al. Conditional Affordance Learning for Driving in Urban Environments, 2018, CoRL.
[22] Anind K. Dey, et al. Maximum Entropy Inverse Reinforcement Learning, 2008, AAAI.
[23] Richard S. Sutton, et al. Model-Based Reinforcement Learning with an Approximate, Learned Model, 1996.
[24] Steven M. LaValle, et al. Planning algorithms, 2006.
[25] Dariu M. Gavrila, et al. Human motion trajectory prediction: a survey, 2019, International Journal of Robotics Research.
[26] Emanuel Todorov, et al. Linearly-solvable Markov decision problems, 2006, NIPS.
[27] Sebastian Thrun, et al. Learning to Play the Game of Chess, 1994, NIPS.
[28] Paul Vernaza, et al. r2p2: A ReparameteRized Pushforward Policy for Diverse, Precise Generative Path Forecasting, 2018, ECCV.
[29] Byron Boots, et al. Truncated Horizon Policy Search: Combining Reinforcement Learning & Imitation Learning, 2018, ICLR.
[30] Shakir Mohamed, et al. Variational Inference with Normalizing Flows, 2015, ICML.
[31] Katherine Rose Driggs-Campbell, et al. DropoutDAgger: A Bayesian Approach to Safe Imitation Learning, 2017, ArXiv.
[32] Emilio Frazzoli, et al. A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles, 2016, IEEE Transactions on Intelligent Vehicles.
[33] Kris M. Kitani, et al. Forecasting Interactive Dynamics of Pedestrians with Fictitious Play, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[34] Heiga Zen, et al. Parallel WaveNet: Fast High-Fidelity Speech Synthesis, 2017, ICML.
[35] Philip H. S. Torr, et al. DESIRE: Distant Future Prediction in Dynamic Scenes with Interacting Agents, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[36] Byron Boots, et al. Differentiable MPC for End-to-end Planning and Control, 2018, NeurIPS.
[37] E. S. Pearson, et al. On the Problem of the Most Efficient Tests of Statistical Hypotheses, 1933.
[38] Allan Jabri, et al. Universal Planning Networks: Learning Generalizable Representations for Visuomotor Control, 2018, ICML.
[39] Marco Pavone, et al. Multimodal Probabilistic Model-Based Planning for Human-Robot Interaction, 2017, 2018 IEEE International Conference on Robotics and Automation (ICRA).
[40] Eder Santana, et al. Exploring the Limitations of Behavior Cloning for Autonomous Driving, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[41] J. Andrew Bagnell, et al. Efficient Reductions for Imitation Learning, 2010, AISTATS.
[42] Eric P. Xing, et al. CIRL: Controllable Imitative Reinforcement Learning for Vision-based Self-driving, 2018, ECCV.
[43] Germán Ros, et al. CARLA: An Open Urban Driving Simulator, 2017, CoRL.
[44] J. Andrew Bagnell, et al. Reinforcement and Imitation Learning via Interactive No-Regret Learning, 2014, ArXiv.
[45] Pieter Abbeel, et al. Value Iteration Networks, 2016, NIPS.
[46] Charles Blundell, et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, 2016, NIPS.
[47] R. McCann. Existence and uniqueness of monotone measure-preserving maps, 1995.