[1] Suresh Venkatasubramanian, et al. Problems with Shapley-value-based explanations as feature importance measures, 2020, ICML.
[2] Guy Van den Broeck, et al. On the Tractability of SHAP Explanations, 2020, AAAI.
[3] Yoshua Bengio, et al. Towards Causal Representation Learning, 2021, ArXiv.
[4] Adnan Darwiche, et al. A Symbolic Approach to Explaining Bayesian Network Classifiers, 2018, IJCAI.
[5] Sanjeev Arora, et al. Probabilistic checking of proofs: a new characterization of NP, 1998, JACM.
[6] Alistair Sinclair, et al. Algorithms for Random Generation and Counting: A Markov Chain Approach, 1993, Progress in Theoretical Computer Science.
[7] Xiaotie Deng, et al. On the Complexity of Cooperative Solution Concepts, 1994, Math. Oper. Res.
[8] Asaf Shabtai, et al. When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures, 2019, 2020 International Joint Conference on Neural Networks (IJCNN).
[9] Pierre Senellart, et al. Connecting Knowledge Compilation Classes and Width Parameters, 2018, Theory of Computing Systems.
[10] Adnan Darwiche, et al. Formal Verification of Bayesian Network Classifiers, 2018, PGM.
[11] Hugh Chen, et al. From local explanations to global understanding with explainable AI for trees, 2020, Nature Machine Intelligence.
[12] Yair Zick, et al. Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems, 2016, IEEE Symposium on Security and Privacy (SP).
[13] Naoya Takeishi, et al. On Anomaly Interpretation via Shapley Values, 2020, ArXiv.
[14] Sanjeev Arora, et al. Computational Complexity: A Modern Approach, 2009.
[15] Carsten Lund, et al. Proof verification and hardness of approximation problems, 1992, Proceedings, 33rd Annual Symposium on Foundations of Computer Science.
[16] Oded Goldreich, et al. Computational complexity: a conceptual perspective, 2008, SIGA.
[17] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[18] Anthony Hunter, et al. On the measure of conflicts: Shapley Inconsistency Values, 2010, Artif. Intell.
[19] Leopoldo E. Bertossi, et al. The Shapley Value of Tuples in Query Answering, 2019, ICDT.
[20] Juan A. Nepomuceno, et al. An application of the Shapley value to the analysis of co-expression networks, 2018, Appl. Netw. Sci.
[21] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[22] Adnan Darwiche, et al. On Symbolically Encoding the Behavior of Random Forests, 2020, ArXiv.
[23] Guy Van den Broeck, et al. Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits, 2020, ICML.
[24] Peter Struss, et al. Model-based Problem Solving, 2008, Handbook of Knowledge Representation.
[25] Richard M. Karp, et al. Monte-Carlo Approximation Algorithms for Enumeration Problems, 1989, J. Algorithms.
[26] Cynthia Rudin, et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[27] L. Shapley, et al. The Shapley Value, 1994.
[28] Leslie G. Valiant, et al. The Complexity of Computing the Permanent, 1979, Theor. Comput. Sci.
[29] Dan Suciu, et al. Causality-based Explanation of Classification Outcomes, 2020, DEEM@SIGMOD.
[30] Su-In Lee, et al. Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression, 2020, AISTATS.
[31] Adnan Darwiche, et al. On The Reasons Behind Decisions, 2020, ECAI.
[32] L. Shapley. A Value for n-person Games, 1988.
[33] Yoshua Bengio, et al. CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning, 2020, ICLR.
[34] László Lovász, et al. Interactive proofs and the hardness of approximating cliques, 1996, JACM.
[35] Klaus-Robert Müller, et al. Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications, 2021, Proceedings of the IEEE.
[36] Guy Van den Broeck, et al. Smoothing Structured Decomposable Circuits, 2019, NeurIPS.
[37] Shubham Rathi, et al. Generating Counterfactual and Contrastive Explanations using SHAP, 2019, ArXiv.
[38] Ankur Taly, et al. The Explanation Game: Explaining Machine Learning Models Using Shapley Values, 2020, CD-MAKE.
[39] Marcelo Arenas, et al. The Tractability of SHAP-Score-Based Explanations over Deterministic and Decomposable Boolean Circuits, 2020.
[40] Adnan Darwiche, et al. Verifying Binarized Neural Networks by Angluin-Style Learning, 2019, SAT.
[41] Erik Strumbelj, et al. An Efficient Explanation of Individual Classifications using Game Theory, 2010, J. Mach. Learn. Res.
[42] Adnan Darwiche, et al. On the Tractable Counting of Theory Models and its Application to Truth Maintenance and Belief Revision, 2001, J. Appl. Non Class. Logics.
[43] Pierre Marquis, et al. A Knowledge Compilation Map, 2002, J. Artif. Intell. Res.
[44] J. Scott Provan, et al. The Complexity of Counting Cuts and of Computing the Probability that a Graph is Connected, 1983, SIAM J. Comput.
[45] Ker-I Ko, et al. Some Observations on the Probabilistic Algorithms and NP-hard Problems, 1982, Inf. Process. Lett.
[46] Cynthia Rudin, et al. An Interpretable Model with Globally Consistent Explanations for Credit Risk, 2018, ArXiv.
[47] U. Faigle, et al. The Shapley value for cooperative games under precedence constraints, 1992.
[48] Nicholas R. Jennings, et al. Efficient Computation of the Shapley Value for Game-Theoretic Network Centrality, 2014, J. Artif. Intell. Res.
[49] Adnan Darwiche, et al. On Tractable Representations of Binary Neural Networks, 2020, KR.