暂无分享,去创建一个
[1] Tim Miller,et al. Explanation in Artificial Intelligence: Insights from the Social Sciences , 2017, Artif. Intell..
[2] M. I. V. Eale,et al. SLAVE TO THE ALGORITHM ? WHY A ‘ RIGHT TO AN EXPLANATION ’ IS PROBABLY NOT THE REMEDY YOU ARE LOOKING FOR , 2017 .
[3] Brandon M. Greenwell,et al. Interpretable Machine Learning , 2019, Hands-On Machine Learning with R.
[4] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, ArXiv.
[5] Mark A. Neerincx,et al. Contrastive Explanations for Reinforcement Learning in terms of Expected Consequences , 2018, IJCAI 2018.
[6] Davide Castelvecchi,et al. Can we open the black box of AI? , 2016, Nature.
[7] 김기경. Accountability , 2019, Encyclopedia of Food and Agricultural Ethics.
[8] M. J. Robeer,et al. Contrastive Explanation for Machine Learning , 2018 .
[9] Chris Russell,et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR , 2017, ArXiv.
[10] Peter A. Flach,et al. Conversational Explanations of Machine Learning Predictions Through Class-contrastive Counterfactual Statements , 2018, IJCAI.
[11] Chris Russell,et al. Explaining Explanations in AI , 2018, FAT.
[12] Erik Weber,et al. Remote causes, bad explanations? , 2002 .
[13] Trevor Darrell,et al. Generating Counterfactual Explanations with Natural Language , 2018, ICML 2018.
[14] Mark A. Neerincx,et al. Contrastive Explanations with Local Foil Trees , 2018, ICML 2018.
[15] Michael I. Jordan,et al. Advances in Neural Information Processing Systems 30 , 1995 .
[16] Zachary Chase Lipton. The mythos of model interpretability , 2016, ACM Queue.