Specifying and Interpreting Reinforcement Learning Policies through Simulatable Machine Learning
[1] Pierre Sermanet, et al. Grounding Language in Play, 2020, ArXiv.
[2] Chitta Baral, et al. Language-Conditioned Imitation Learning for Robot Manipulation Tasks, 2020, NeurIPS.
[3] Shen Li, et al. Bayesian Inference of Temporal Task Specifications from Demonstrations, 2018, NeurIPS.
[4] Demis Hassabis, et al. Grounded Language Learning in a Simulated 3D World, 2017, ArXiv.
[5] Anna Goldenberg, et al. What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use, 2019, MLHC.
[6] Sorin Grigorescu, et al. A Survey of Deep Learning Techniques for Autonomous Driving, 2020, J. Field Robotics.
[7] Percy Liang, et al. Data Recombination for Neural Semantic Parsing, 2016, ACL.
[8] R. Mayer, et al. Three Facets of Visual and Verbal Learners: Cognitive Ability, Cognitive Style, and Learning Preference, 2003.
[9] Shie Mannor, et al. Graying the Black Box: Understanding DQNs, 2016, ICML.
[10] Fred D. Davis. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology, 1989, MIS Q.
[11] B. Jones. Bounded Rationality, 1999.
[12] Mirella Lapata, et al. Language to Logical Form with Neural Attention, 2016, ACL.
[13] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[14] Andrea Lockerd Thomaz, et al. Robot Learning from Human Teachers, 2014.
[15] Luke S. Zettlemoyer, et al. Weakly Supervised Learning of Semantic Parsers for Mapping Instructions to Actions, 2013, TACL.
[16] H. Friedrich, et al. Obtaining Good Performance from a Bad Teacher, 1995, Programming by Demonstration vs. Learning from Examples Workshop at ML'95.
[17] Sheila A. McIlraith, et al. Using Reward Machines for High-Level Task Specification and Decomposition in Reinforcement Learning, 2018, ICML.
[18] Paul Smolensky, et al. Connectionist AI, Symbolic AI, and the Brain, 1987, Artificial Intelligence Review.
[19] Yunyao Li, et al. Who Needs to Know What, When?: Broadening the Explainable AI (XAI) Design Space by Looking at Explanations Across the AI Lifecycle, 2021, Conference on Designing Interactive Systems.
[20] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[21] Yunfeng Zhang, et al. Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making, 2020, FAT*.
[22] C. Flavián, et al. Integrating Trust and Personal Values into the Technology Acceptance Model: The Case of E-Government Services Adoption, 2012.
[23] Matthew Gombolay, et al. Learning from Suboptimal Demonstration via Self-Supervised Reward Regression, 2020, ArXiv.
[24] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[25] Q. Liao, et al. Questioning the AI: Informing Design Practices for Explainable AI User Experiences, 2020, CHI.
[26] Matthew R. Walter, et al. Understanding Natural Language Commands for Robotic Navigation and Mobile Manipulation, 2011, AAAI.
[27] Stefanie Tellex, et al. Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities, 2017, Robotics: Science and Systems.
[28] Hod Lipson, et al. Understanding Neural Networks Through Deep Visualization, 2015, ArXiv.
[29] Giulio Sandini, et al. Humanizing Human-Robot Interaction: On the Importance of Mutual Understanding, 2018, IEEE Technology and Society Magazine.
[30] Ross A. Knepper, et al. Following High-Level Navigation Instructions on a Simulated Quadcopter with Imitation Learning, 2018, Robotics: Science and Systems.
[31] Mark O. Riedl, et al. Automated Rationale Generation: A Technique for Explainable AI and Its Effects on Human Perceptions, 2019, IUI.
[32] Petter Nilsson, et al. Toward Specification-Guided Active Mars Exploration for Cooperative Robot Teams, 2018, Robotics: Science and Systems.
[33] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, ArXiv:1702.08608.
[34] Andrew Bennett, et al. Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction, 2018, EMNLP.
[35] Dan Klein, et al. Modular Multitask Reinforcement Learning with Policy Sketches, 2016, ICML.
[36] Li Wang, et al. The Robotarium: A Remotely Accessible Swarm Robotics Research Testbed, 2017, IEEE International Conference on Robotics and Automation (ICRA).
[37] Eric Horvitz, et al. Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff, 2019, AAAI.
[38] Alberto Suárez, et al. Globally Optimal Fuzzy Decision Trees for Classification and Regression, 1999, IEEE Trans. Pattern Anal. Mach. Intell.
[39] Hadas Kress-Gazit, et al. Translating Structured English to Robot Controllers, 2008, Adv. Robotics.
[40] Zachary Chase Lipton. The Mythos of Model Interpretability, 2016, ACM Queue.
[41] Demetra Evangelou, et al. Orientations and Motivations: Are You a "People Person," a "Thing Person," or Both?, 2012.
[42] Sergey Levine, et al. Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations, 2017, Robotics: Science and Systems.
[43] J. L. Peterson, et al. Deep Neural Network Initialization With Decision Trees, 2017, IEEE Transactions on Neural Networks and Learning Systems.
[44] Alvin Cheung, et al. Learning a Neural Semantic Parser from User Feedback, 2017, ACL.
[45] Christian Muise, et al. Evaluating the Interpretability of the Knowledge Compilation Map: Communicating Logical Statements Effectively, 2019, IJCAI.
[46] Diyi Yang, et al. Hierarchical Attention Networks for Document Classification, 2016, NAACL.
[47] Luke S. Zettlemoyer, et al. Learning to Parse Natural Language Commands to a Robot Control System, 2012, ISER.
[48] Peter Stone, et al. Improving Grounded Natural Language Understanding through Human-Robot Dialog, 2019, IEEE International Conference on Robotics and Automation (ICRA).
[49] Sung-Hyun Son, et al. Optimization Methods for Interpretable Differentiable Decision Trees Applied to Reinforcement Learning, 2020, AISTATS.
[50] Pushmeet Kohli, et al. Learning to Understand Goal Specifications by Modelling Reward, 2018, ICLR.
[51] Stefanie Tellex, et al. Sequence-to-Sequence Language Grounding of Non-Markovian Task Specifications, 2018, Robotics: Science and Systems.
[52] Laura G. Militello, et al. Macrocognition, Mental Models, and Cognitive Task Analysis Methodology, 2017.
[53] Matthew C. Gombolay, et al. ProLoNets: Neural-Encoding Human Experts' Domain Knowledge to Warm Start Reinforcement Learning, 2019, ArXiv.
[54] Ross A. Knepper, et al. Learning to Map Natural Language Instructions to Physical Quadcopter Control Using Simulated Flight, 2019, CoRL.
[55] Tim Miller, et al. Explainable Reinforcement Learning Through a Causal Lens, 2019, AAAI.
[56] Pieter Abbeel, et al. Apprenticeship Learning via Inverse Reinforcement Learning, 2004, ICML.
[57] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2017, IEEE International Conference on Computer Vision (ICCV).
[58] Wojciech Samek, et al. Methods for Interpreting and Understanding Deep Neural Networks, 2017, Digit. Signal Process.
[59] Hadas Kress-Gazit, et al. Robot-Initiated Specification Repair through Grounded Language Interaction, 2017, ArXiv.
[60] Maya Cakmak, et al. Power to the People: The Role of Humans in Interactive Machine Learning, 2014, AI Mag.
[61] Thomas G. Dietterich. Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition, 1999, J. Artif. Intell. Res.
[62] Sheila A. McIlraith, et al. Teaching Multiple Tasks to an RL Agent Using LTL, 2018, AAMAS.
[63] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.