Explaining Model Behavior with Global Causal Analysis