Qiangui Huang | Moritz Niendorf | Peter Ondruska | Ashesh Jain | Hugo Grimmett | Maciej Wołczyk | Yawei Ye | Matt Vitelli | Yan Chang | Błażej Osiński
[1] Julius Ziegler,et al. Optimal trajectories for time-critical street scenarios using discretized terminal manifolds , 2012, Int. J. Robotics Res..
[2] Mayank Bansal,et al. ChauffeurNet: Learning to Drive by Imitating the Best and Synthesizing the Worst , 2018, Robotics: Science and Systems.
[3] Geoffrey J. Gordon,et al. A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning , 2010, AISTATS.
[4] Matthias Althoff,et al. Online Verification of Automated Road Vehicles Using Reachability Analysis , 2014, IEEE Transactions on Robotics.
[5] Sergio Casas,et al. End-To-End Interpretable Neural Motion Planner , 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Michael C. Yip,et al. Motion Planning Networks , 2018, 2019 International Conference on Robotics and Automation (ICRA).
[7] Steven M. LaValle,et al. RRT-connect: An efficient approach to single-query path planning , 2000, Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065).
[8] Xin Zhang,et al. End to End Learning for Self-Driving Cars , 2016, ArXiv.
[9] John M. Dolan,et al. A behavioral planning framework for autonomous driving , 2014, 2014 IEEE Intelligent Vehicles Symposium Proceedings.
[10] Amnon Shashua,et al. On a Formal Model of Safe and Scalable Self-driving Cars , 2017, ArXiv.
[11] Markus Wulfmeier,et al. Maximum Entropy Deep Inverse Reinforcement Learning , 2015, ArXiv.
[12] Changchun Liu,et al. Baidu Apollo EM Motion Planner , 2018, ArXiv.
[13] Anind K. Dey,et al. Maximum Entropy Inverse Reinforcement Learning , 2008, AAAI.
[14] David Janz,et al. Learning to Drive in a Day , 2018, 2019 International Conference on Robotics and Automation (ICRA).
[15] Emilio Frazzoli,et al. Sampling-based algorithms for optimal motion planning , 2011, Int. J. Robotics Res..
[16] S. Levine,et al. Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems , 2020, ArXiv.
[17] Panagiotis Tsiotras,et al. Machine learning guided exploration for sampling-based motion planning algorithms , 2015, 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[18] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[19] Henggang Cui,et al. Deep Kinematic Models for Kinematically Feasible Vehicle Trajectory Predictions , 2019, 2020 IEEE International Conference on Robotics and Automation (ICRA).
[20] Lydia Tapia,et al. RL-RRT: Kinodynamic Motion Planning via Learning Reachability Estimators From RL Policies , 2019, IEEE Robotics and Automation Letters.
[21] Sergey Levine,et al. Causal Confusion in Imitation Learning , 2019, NeurIPS.
[22] Sanjiv Singh,et al. The DARPA Urban Challenge: Autonomous Vehicles in City Traffic, George Air Force Base, Victorville, California, USA , 2009, The DARPA Urban Challenge.
[23] Martin A. Riedmiller,et al. Learning to Drive a Real Car in 20 Minutes , 2007, 2007 Frontiers in the Convergence of Bioscience and Information Technologies.
[24] Emilio Frazzoli,et al. A Survey of Motion Planning and Control Techniques for Self-Driving Urban Vehicles , 2016, IEEE Transactions on Intelligent Vehicles.
[25] Dragomir Anguelov,et al. VectorNet: Encoding HD Maps and Agent Dynamics From Vectorized Representation , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[26] Raquel Urtasun,et al. MP3: A Unified Model to Map, Perceive, Predict and Plan , 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Michael Stolz,et al. Search-Based Optimal Motion Planning for Automated Driving , 2018, 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[28] Dean Pomerleau. ALVINN, an autonomous land vehicle in a neural network , 1988, NIPS.
[29] Peter Ondruska,et al. Autonomy 2.0: Why is self-driving always 5 years away? , 2021, ArXiv.
[30] Amnon Shashua,et al. Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving , 2016, ArXiv.
[31] Alex Kendall,et al. Urban Driving with Conditional Imitation Learning , 2019, 2020 IEEE International Conference on Robotics and Automation (ICRA).
[32] Leonidas J. Guibas,et al. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[33] Wenlong Fu,et al. Model-based reinforcement learning: A survey , 2018 .
[34] Oliver Scheel,et al. SimNet: Learning Reactive Self-driving Simulations from Real-world Observations , 2021, 2021 IEEE International Conference on Robotics and Automation (ICRA).
[35] Emilio Frazzoli,et al. Intention-Aware Motion Planning , 2013, WAFR.
[36] Francisco Eiras,et al. PILOT: Efficient Planning by Imitation Learning and Optimisation for Safe Autonomous Driving , 2021, 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[37] Sebastian Thrun,et al. Junior: The Stanford entry in the Urban Challenge , 2008, J. Field Robotics.