[1] Tommi S. Jaakkola, et al. On the Robustness of Interpretability Methods, 2018, arXiv.
[2] Suresh Venkatasubramanian, et al. Problems with Shapley-value-based explanations as feature importance measures, 2020, ICML.
[3] Emil Pitkin, et al. Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation, 2013, arXiv:1309.6392.
[4] Adnan Darwiche, et al. The Same-Decision Probability: A New Tool for Decision Making, 2012.
[5] J. Friedman. Greedy function approximation: A gradient boosting machine, 2001, Annals of Statistics.
[6] Adnan Darwiche, et al. An Exact Algorithm for Computing the Same-Decision Probability, 2013, IJCAI.
[7] Tom Claassen, et al. Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models, 2020, NeurIPS.
[8] Kjersti Aas, et al. Explaining predictive models using Shapley values and non-parametric vine copulas, 2021, Dependence Modeling.
[9] Zhi-Hua Zhou, et al. Isolation Forest, 2008, Eighth IEEE International Conference on Data Mining (ICDM).
[10] Daniel Fryer, et al. Shapley values for feature selection: The good, the bad, and the axioms, 2021, IEEE Access.
[11] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[12] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, KDD.
[13] Sisi Ma, et al. Predictive and Causal Implications of using Shapley Value for Model Interpretation, 2020, CD@KDD.
[14] Joseph D. Janizek, et al. True to the Model or True to the Data?, 2020, arXiv.
[15] Kjersti Aas, et al. Explaining individual predictions when features are dependent: More accurate approximations to Shapley values, 2019, Artificial Intelligence.
[16] Erik Štrumbelj, et al. An Efficient Explanation of Individual Classifications using Game Theory, 2010, Journal of Machine Learning Research.
[17] Scott Lundberg, et al. Explaining by Removing: A Unified Framework for Model Explanation, 2020.
[18] Hugh Chen, et al. From local explanations to global understanding with explainable AI for trees, 2020, Nature Machine Intelligence.
[19] Dominik Janzing, et al. Feature relevance quantification in explainable AI: A causality problem, 2019, AISTATS.
[20] Clément Bénard, et al. Interpretable Random Forests via Rule Extraction, 2020, AISTATS.
[21] Bernd Bischl, et al. Grouped feature importance and combined features effect plot, 2021, Data Mining and Knowledge Discovery.