[1] Franco Turini, et al. A Survey of Methods for Explaining Black Box Models, 2018, ACM Comput. Surv.
[2] Ankur Taly, et al. Explainable machine learning in deployment, 2019, FAT*.
[3] Timothy W. Finin, et al. The need for user models in generating expert system explanation, 1988.
[4] Motoaki Kawanabe, et al. How to Explain Individual Classification Decisions, 2009, J. Mach. Learn. Res.
[5] J. Friedman. Greedy function approximation: A gradient boosting machine, 2001.
[6] S. Leurgans, et al. Parkinson disease with old-age onset: a comparative study with subjects with middle-age onset, 2003, Archives of Neurology.
[7] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[8] Donglin Zeng, et al. Personalized Dose Finding Using Outcome Weighted Learning, 2016, Journal of the American Statistical Association.
[9] Fei Wang, et al. Deep Learning in Medicine: Promise, Progress, and Challenges, 2019, JAMA Internal Medicine.
[10] Fei Wang, et al. Deep learning for healthcare: review, opportunities and challenges, 2018, Briefings Bioinform.
[11] Joseph Y. Halpern, et al. Causes and explanations: A structural-model approach, 2000.
[12] Geoffrey E. Hinton, et al. Distilling a Neural Network Into a Soft Decision Tree, 2017, CEx@AI*IA.
[13] Emil Pitkin, et al. Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation, 2013, arXiv:1309.6392.
[14] Vineeth N. Balasubramanian, et al. Neural Network Attributions: A Causal Perspective, 2019, ICML.
[15] Julapa Jagtiani, et al. The Roles of Alternative Data and Machine Learning in Fintech Lending: Evidence from the LendingClub Consumer Platform, 2018, Financial Management.
[16] Xintao Wu, et al. Fairness through Equality of Effort, 2019, WWW.
[17] Sebastian Thrun, et al. Extracting Rules from Artificial Neural Networks with Distributed Representations, 1994, NIPS.
[18] Debashis Ghosh, et al. A Boosting Algorithm for Estimating Generalized Propensity Scores with Continuous Treatments, 2015, Journal of Causal Inference.
[19] Max A. Little, et al. Accurate Telemonitoring of Parkinson's Disease Progression by Noninvasive Speech Tests, 2009, IEEE Transactions on Biomedical Engineering.
[20] Dumitru Erhan, et al. The (Un)reliability of Saliency Methods, 2017, Explainable AI.
[21] Huan Liu, et al. Understanding Neural Networks via Rule Extraction, 1995, IJCAI.
[22] Hod Lipson, et al. Understanding Neural Networks Through Deep Visualization, 2015, arXiv.
[23] Risto Miikkulainen, et al. GRADE: Machine Learning Support for Graduate Admissions, 2013, AI Mag.
[24] P. Cochat, et al. Et al., 2008, Archives de Pédiatrie.
[25] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[26] Brendan J. Frey, et al. Machine Learning in Genomic Medicine: A Review of Computational Problems and Data Sets, 2016, Proceedings of the IEEE.
[27] Valerie Tarasuk, et al. Liberal trade policy and food insecurity across the income distribution: an observational analysis in 132 countries, 2014–17, 2020, The Lancet Global Health.
[28] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[29] Chris Russell, et al. Explaining Explanations in AI, 2018, FAT*.
[30] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[31] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[32] Vasant Honavar, et al. Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality, 2019, WWW.
[33] Justin A. Sirignano, et al. Deep Learning for Mortgage Risk, 2016, Journal of Financial Econometrics.
[34] Alexander Binder, et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, 2015, PLoS ONE.
[35] Jude W. Shavlik, et al. Extracting refined rules from knowledge-based neural networks, 2004, Machine Learning.
[36] Uri Shalit, et al. Learning Representations for Counterfactual Inference, 2016, ICML.
[37] William J. Clancey, et al. Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI, 2019, arXiv.
[38] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2016, IEEE International Conference on Computer Vision (ICCV) 2017.
[39] Ilya Shpitser, et al. Deriving Bounds and Inequality Constraints Using Logical Relations Among Counterfactuals, 2020, UAI.
[40] Trevor Hastie, et al. Causal Interpretations of Black-Box Models, 2019, Journal of Business & Economic Statistics.
[41] Elias Bareinboim, et al. Equality of Opportunity in Classification: A Causal Approach, 2018, NeurIPS.
[42] Avanti Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[43] J. Robins, et al. Marginal Structural Models and Causal Inference in Epidemiology, 2000, Epidemiology.
[44] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[45] Michael Rabadi, et al. Kernel Methods for Machine Learning, 2015.
[46] Matt J. Kusner, et al. Counterfactual Fairness, 2017, NIPS.
[47] Anand Singh Jalal, et al. Suspicious human activity recognition: a review, 2017, Artificial Intelligence Review.
[48] Joshua D. Angrist, et al. Identification of Causal Effects Using Instrumental Variables, 1993.
[49] Jenna Wiens, et al. Machine Learning for Healthcare: On the Verge of a Major Shift in Healthcare Epidemiology, 2018, Clinical Infectious Diseases.
[50] Hans-J. Briegel, et al. Machine learning & artificial intelligence in the quantum domain, 2017, arXiv.
[51] Le Song, et al. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation, 2018, ICML.
[52] Daniel W. Davies, et al. Machine learning for molecular and materials science, 2018, Nature.
[53] Jenna Burrell, et al. How the machine 'thinks': Understanding opacity in machine learning algorithms, 2016.
[54] K. Borgwardt, et al. Machine Learning in Medicine, 2015, Mach. Learn. under Resour. Constraints Vol. 3.
[55] Tim Miller, et al. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[56] Ilya Shpitser, et al. Fair Inference on Outcomes, 2017, AAAI.
[57] J. Zubizarreta. Stable Weights that Balance Covariates for Estimation With Incomplete Outcome Data, 2015.
[58] Raimo Tuomela, et al. A Pragmatic Theory of Explanation, 1984.
[59] Erik Strumbelj, et al. Explaining prediction models and individual predictions with feature contributions, 2014, Knowledge and Information Systems.
[60] M. J. van der Laan, et al. Super Learner, 2010, Statistical Applications in Genetics and Molecular Biology.
[61] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[62] M. J. van der Laan, et al. Improving Propensity Score Estimators' Robustness to Model Misspecification Using Super Learner, 2015, Practice of Epidemiology.
[63] Guillermo Sapiro, et al. A Shared Vision for Machine Learning in Neuroscience, 2018, The Journal of Neuroscience.
[64] Anders Larrabee Sønderlund, et al. The efficacy of learning analytics interventions in higher education: A systematic review, 2018, Br. J. Educ. Technol.
[65] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[66] Noemi Kreif, et al. Machine learning in policy evaluation: new tools for causal inference, 2019, Oxford Research Encyclopedia of Economics and Finance.
[67] Christophe Croux, et al. Important factors determining Fintech loan default: Evidence from a LendingClub consumer platform, 2020.
[68] Andre Esteva, et al. A guide to deep learning in healthcare, 2019, Nature Medicine.
[69] G. Imbens, et al. The Propensity Score with Continuous Treatments, 2005.
[70] Atul J. Butte, et al. A call for deep-learning healthcare, 2019, Nature Medicine.
[71] Charu C. Aggarwal, et al. On the Surprising Behavior of Distance Metrics in High Dimensional Spaces, 2001, ICDT.
[72] M. J. van der Laan, et al. Targeted Learning: Causal Inference for Observational and Experimental Data, 2011.
[73] Pascal Vincent, et al. Representation Learning: A Review and New Perspectives, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[74] Diogo M. Camacho, et al. Next-Generation Machine Learning for Biological Networks, 2018, Cell.
[75] Joseph Y. Halpern, et al. Causes and Explanations: A Structural-Model Approach. Part II: Explanations, 2001, The British Journal for the Philosophy of Science.
[76] Stefania Albanesi, et al. Predicting Consumer Default: A Deep Learning Approach, 2019, SSRN Electronic Journal.
[77] Max Welling, et al. Causal Effect Inference with Deep Latent-Variable Models, 2017, NIPS.
[78] Cynthia Rudin, et al. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model, 2015, arXiv.
[79] Ribana Roscher, et al. Explainable Machine Learning for Scientific Insights and Discoveries, 2019, IEEE Access.
[80] Olivier Bachem, et al. Recent Advances in Autoencoder-Based Representation Learning, 2018, arXiv.
[81] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[82] Wesley C. Salmon, et al. Van Fraassen on Explanation, 1987.
[83] Illtyd Trethowan. Causality, 1938.
[84] Anna Shcherbina, et al. Not Just a Black Box: Learning Important Features Through Propagating Activation Differences, 2016, arXiv.
[85] D. Rubin, et al. The central role of the propensity score in observational studies for causal effects, 1983.
[86] Wesley C. Salmon, et al. Causality and Explanation, 1998.
[87] Charles F. Manski, et al. Identification for Prediction and Decision, 2008.
[88] Suresh Venkatasubramanian, et al. Problems with Shapley-value-based explanations as feature importance measures, 2020, ICML.
[89] Vasant Honavar, et al. Algorithmic Bias in Recidivism Prediction: A Causal Perspective, 2019, AAAI.
[90] Kosuke Imai, et al. Causal Inference With General Treatment Regimes, 2004.
[91] E. Mjolsness, et al. Machine learning for science: state of the art and future prospects, 2001, Science.
[92] Joachim Diederich, et al. Survey and critique of techniques for extracting rules from trained artificial neural networks, 1995, Knowl. Based Syst.
[93] Robert Pelzer, et al. Policing of Terrorism Using Data from Social Media, 2018, European Journal for Security Research.
[94] J. Hendler, et al. Amplify scientific discovery with artificial intelligence, 2014, Science.
[95] L. Shapley. A Value for n-person Games, 1988.
[96] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[97] Sebastián Ventura, et al. Educational Data Mining: A Review of the State of the Art, 2010, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews).
[98] Markus H. Gross, et al. Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation, 2019, ICML.
[99] Walter Karlen, et al. Perfect Match: A Simple Method for Learning Representations for Counterfactual Inference with Neural Networks, 2018, arXiv.
[100] J. Woodward, et al. Scientific Explanation and the Causal Structure of the World, 1988.
[101] S. Lipovetsky, et al. Analysis of regression in game theory approach, 2001.
[102] Yair Zick, et al. Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems, 2016, IEEE Symposium on Security and Privacy (SP).
[103] Wojciech Samek, et al. Methods for interpreting and understanding deep neural networks, 2017, Digit. Signal Process.
[104] Chad Hazlett, et al. Covariate balancing propensity score for a continuous treatment: Application to the efficacy of political advertisements, 2018.