[1] Amit Dhurandhar, et al. Generating Contrastive Explanations with Monotonic Attribute Functions. ArXiv, 2019.
[2] Jaime G. Carbonell, et al. Learning by Experimentation: The Operator Refinement Method. 1990.
[3] Yu Zhang, et al. Plan Explanations as Model Reconciliation: Moving Beyond Explanation as Soliloquy. IJCAI, 2017.
[4] Gaël Varoquaux, et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res., 2011.
[5] Tim Miller, et al. Explainable Reinforcement Learning Through a Causal Lens. AAAI, 2019.
[6] Subbarao Kambhampati, et al. The Emerging Landscape of Explainable Automated Planning & Decision Making. IJCAI, 2020.
[7] Martin Wattenberg, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). ICML, 2018.
[8] Yoav Freund, et al. A Short Introduction to Boosting. 1999.
[9] Leslie Pack Kaelbling, et al. From Skills to Symbols: Learning Symbolic Representations for Abstract High-Level Planning. J. Artif. Intell. Res., 2018.
[10] Michael Winikoff, et al. Debugging Agent Programs with "Why?" Questions. AAMAS, 2017.
[11] Subbarao Kambhampati, et al. Hierarchical Expertise Level Modeling for User Specific Contrastive Explanations. IJCAI, 2018.
[12] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. KDD, 2016.
[13] Bradley Hayes, et al. Improving Robot Controller Transparency Through Autonomous Policy Explanation. 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2017.
[14] Subbarao Kambhampati, et al. Why Can't You Do That HAL? Explaining Unsolvability of Planning Tasks. IJCAI, 2019.
[15] Blai Bonet, et al. A Concise Introduction to Models and Methods for Automated Planning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2013.
[16] Thomas Keller, et al. Abstractions for Planning with State-Dependent Action Costs. ICAPS, 2016.
[17] Mark A. Neerincx, et al. Contrastive Explanations for Reinforcement Learning in Terms of Expected Consequences. IJCAI, 2018.
[18] Wojciech Zaremba, et al. OpenAI Gym. ArXiv, 2016.
[19] Andrew Anderson, et al. Explaining Reinforcement Learning to Mere Mortals: An Empirical Study. IJCAI, 2019.
[20] Jonathan Schaeffer, et al. Using Abstraction for Planning in Sokoban. Computers and Games, 2002.
[21] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences. Artif. Intell., 2017.
[22] Brendan Juba, et al. Efficient, Safe, and Probably Approximately Complete Learning of Action Models. IJCAI, 2017.