Volker Gruhn | Jonas Andrulis | Ole Meyer | Grégory Schott | Samuel Weinbach
[1] Gérard P. Cachon, et al. Game Theory in Supply Chain Analysis, 2004.
[2] Andrea Ferrario, et al. In AI We Trust Incrementally: A Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions, 2019, Philosophy & Technology.
[3] Lars Niklasson, et al. Evolving decision trees using oracle guides, 2009, IEEE Symposium on Computational Intelligence and Data Mining.
[4] Li Zhao, et al. Reinforcement Learning for Relation Classification From Noisy Data, 2018, AAAI.
[5] Emily Chen, et al. How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation, 2018, arXiv.
[6] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[7] Girish Chowdhary, et al. Robust Deep Reinforcement Learning with Adversarial Attacks, 2017, AAMAS.
[8] Yoshua Bengio, et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, 2015, ICML.
[9] Gábor Orosz, et al. End-to-End Safe Reinforcement Learning through Barrier Functions for Safety-Critical Continuous Control Tasks, 2019, AAAI.
[10] Taehoon Kim, et al. Quantifying Generalization in Reinforcement Learning, 2018, ICML.
[11] Abhinav Verma, et al. Programmatically Interpretable Reinforcement Learning, 2018, ICML.
[12] Pedro Sequeira, et al. Interestingness Elements for Explainable Reinforcement Learning: Understanding Agents' Capabilities and Limitations, 2019, Artif. Intell.
[13] Franco Turini, et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Comput. Surv.
[14] H. Chesbrough. Business Model Innovation: Opportunities and Barriers, 2010.
[15] Xia Hu, et al. Techniques for interpretable machine learning, 2018, Commun. ACM.
[16] J. Friedman. Greedy function approximation: A gradient boosting machine, 2001.
[17] Sebastian Junges, et al. Safety-Constrained Reinforcement Learning for MDPs, 2015, TACAS.
[18] Michael Macedonia, et al. Computer Games and the Military: Two Views, 2002.
[19] Jude W. Shavlik, et al. In Advances in Neural Information Processing, 1996.
[20] Marty J. Wolf, et al. Developing artificial agents worthy of trust: "Would you buy a used car from this artificial agent?", 2011, Ethics and Information Technology.
[21] Thomas A. Runkler, et al. Interpretable Policies for Reinforcement Learning by Genetic Programming, 2017, Eng. Appl. Artif. Intell.
[22] Daniel Guo, et al. Agent57: Outperforming the Atari Human Benchmark, 2020, ICML.
[23] Clayton M. Christensen. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail, 2013.
[24] Oliver Schulte, et al. Toward Interpretable Deep Reinforcement Learning with Linear Model U-Trees, 2018, ECML/PKDD.
[25] Valerie F. Reyna, et al. Educating Intuition, 2015, Current Directions in Psychological Science.
[26] Andrea Saltelli, et al. Sensitivity Analysis for Importance Assessment, 2002, Risk Analysis.
[27] Kathryn Graziano. The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail, 1998.
[28] Alexander Binder, et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, 2015, PLoS ONE.
[29] Demis Hassabis, et al. Mastering the game of Go without human knowledge, 2017, Nature.
[30] Andrea Vedaldi, et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation, 2017, IEEE International Conference on Computer Vision (ICCV).
[31] Branislav L. Slantchev. On the Proper Use of Game-Theoretic Models in Conflict Studies, 2017.
[32] Jin Wang, et al. Overview on DeepMind and Its AlphaGo Zero AI, 2018, ICBDE.
[33] R. A. Forder, et al. Military Operations Research: Quantitative Decision Making, 1997, J. Oper. Res. Soc.
[34] Yuandong Tian, et al. ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games, 2017, NIPS.
[35] Lennart Ljung, et al. Comparing different approaches to model error modeling in robust identification, 2002, Autom.
[36] Wojciech M. Czarnecki, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning, 2019, Nature.
[37] Harvey J. Greenberg, et al. Models, Methods, and Applications for Innovative Decision Making, 2006.
[38] Demis Hassabis, et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[39] Michael Buro, et al. Real-Time Strategy Games: A New AI Research Challenge, 2003, IJCAI.
[40] Philip Bachman, et al. Deep Reinforcement Learning that Matters, 2017, AAAI.
[41] Tim Miller, et al. Explainable Reinforcement Learning Through a Causal Lens, 2019, AAAI.
[42] Mark A. Neerincx, et al. Contrastive Explanations with Local Foil Trees, 2018, ICML.
[43] Eric M. S. P. Veith, et al. Explainable Reinforcement Learning: A Survey, 2020, CD-MAKE.
[44] Jakub W. Pachocki, et al. Dota 2 with Large Scale Deep Reinforcement Learning, 2019, arXiv.
[45] S. Sastry, et al. Adaptive Control: Stability, Convergence and Robustness, 1989.
[46] Paul J. H. Schoemaker, et al. Forecasting and Scenario Planning: The Challenges of Uncertainty and Complexity, 2008.
[47] Hod Lipson, et al. Understanding Neural Networks Through Deep Visualization, 2015, arXiv.