A learning model for personalized adaptive cruise control
Gang Wang | Xin Chen | Chao Lu | Jianwei Gong | Yong Zhai
[1] Andrew Y. Ng, et al. Algorithms for Inverse Reinforcement Learning, 2000, ICML.
[2] Lei Zhang, et al. An Adaptive Longitudinal Driving Assistance System Based on Driver Characteristics, 2013, IEEE Transactions on Intelligent Transportation Systems.
[3] Pieter Abbeel, et al. Apprenticeship learning via inverse reinforcement learning, 2004, ICML.
[4] Bako Rajaonah, et al. Driver's behaviors and human-machine interactions characterization for the design of an advanced driving assistance system, 2004, IEEE International Conference on Systems, Man and Cybernetics.
[5] Sergey Levine, et al. Continuous Inverse Optimal Control with Locally Optimal Examples, 2012, ICML.
[6] Yeung Yam, et al. Performance evaluation and optimization of human control strategy, 2002, Robotics Auton. Syst.
[7] Sebastian Thrun, et al. Towards fully autonomous driving: Systems and algorithms, 2011, IEEE Intelligent Vehicles Symposium (IV).
[8] Francisco S. Melo, et al. Q-Learning with Linear Function Approximation, 2007, COLT.
[9] Andrew W. Moore, et al. Reinforcement Learning: A Survey, 1996, J. Artif. Intell. Res.
[10] Julius Ziegler, et al. Making Bertha Drive—An Autonomous Journey on a Historic Route, 2014, IEEE Intelligent Transportation Systems Magazine.
[11] Martin A. Riedmiller, et al. A direct adaptive method for faster backpropagation learning: the RPROP algorithm, 1993, IEEE International Conference on Neural Networks.
[12] Martin T. Hagan, et al. Training Feedforward Networks with the Marquardt Algorithm, 1994, IEEE Transactions on Neural Networks.
[13] Azim Eskandarian, et al. Research advances in intelligent collision avoidance and adaptive cruise control, 2003, IEEE Trans. Intell. Transp. Syst.
[14] Feng Gao, et al. A comprehensive review of the development of adaptive cruise control systems, 2010.
[15] Steven J. Bradtke, et al. Reinforcement Learning Applied to Linear Quadratic Regulation, 1992, NIPS.
[16] S.H.G. ten Hagen. Continuous State Space Q-Learning for control of Nonlinear Systems, 2001.
[17] Kee-Eung Kim, et al. Bayesian Nonparametric Feature Construction for Inverse Reinforcement Learning, 2013, IJCAI.
[18] William Whittaker, et al. Autonomous driving in urban environments: Boss and the Urban Challenge, 2008, J. Field Robotics.
[19] Yun Li, et al. Patents, software, and hardware for PID control: an overview and analysis of the current art, 2006, IEEE Control Systems.
[20] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[21] Junqiang Xi, et al. A Learning-Based Approach for Lane Departure Warning Systems With a Personalized Driver Model, 2017, IEEE Transactions on Vehicular Technology.
[22] Wolfram Burgard, et al. Learning driving styles for autonomous vehicles from demonstration, 2015, IEEE International Conference on Robotics and Automation (ICRA).
[23] Ben J. A. Kröse, et al. Neural Q-learning, 2003, Neural Computing & Applications.
[24] Shimon Whiteson, et al. Inverse Reinforcement Learning from Failure, 2016, AAMAS.
[25] Rajesh Rajamani, et al. Vehicle dynamics and control, 2005.
[26] Alberto Broggi, et al. The TerraMax autonomous vehicle, 2006, J. Field Robotics.
[27] Francesco Borrelli, et al. A Learning-Based Framework for Velocity Control in Autonomous Driving, 2016, IEEE Transactions on Automation Science and Engineering.
[28] Anca D. Dragan, et al. Planning for Autonomous Cars that Leverage Effects on Human Actions, 2016, Robotics: Science and Systems.
[29] Alberto Broggi, et al. PROUD—Public Road Urban Driverless-Car Test, 2015, IEEE Transactions on Intelligent Transportation Systems.