On the Tractability of SHAP Explanations
Guy Van den Broeck | Dan Suciu | Maximilian Schleich | Anton Lykov