[2] Xiting Wang, et al. Towards better analysis of machine learning models: A visual analytics perspective, 2017, Vis. Informatics.
[3] Ladislav Hluchý, et al. Machine Learning and Deep Learning frameworks and libraries for large-scale data mining: a survey, 2019, Artificial Intelligence Review.
[4] Sébastien Gambs, et al. Fairwashing: the risk of rationalization, 2019, ICML.
[5] Inioluwa Deborah Raji, et al. Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products, 2019, AIES.
[6] Daniel W. Apley, et al. Visualizing the effects of predictor variables in black box supervised learning models, 2016, Journal of the Royal Statistical Society: Series B (Statistical Methodology).
[7] Przemyslaw Biecek, et al. Simpler is better: Lifting interpretability-performance trade-off via automated feature engineering, 2021, Decis. Support Syst.
[8] Francisco Herrera, et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI, 2020, Inf. Fusion.
[9] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[10] Klaus-Robert Müller, et al. Explanations can be manipulated and geometry is to blame, 2019, NeurIPS.
[11] Bernd Bischl, et al. iml: An R package for Interpretable Machine Learning, 2018, J. Open Source Softw.
[12] M. Braga, et al. Exploratory Data Analysis, 2018, Encyclopedia of Social Network Analysis and Mining, 2nd Ed.
[14] Sebastian Gehrmann, et al. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformers Models, 2019, ArXiv.
[15] Tim Miller. Explanation in Artificial Intelligence: Insights from the Social Sciences, 2017, Artif. Intell.
[16] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[17] Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[18] Rich Caruana, et al. InterpretML: A Unified Framework for Machine Learning Interpretability, 2019, ArXiv.
[19] G. G. Stokes. "J.", 1890, The New Yale Book of Quotations.
[20] Tie-Yan Liu, et al. LightGBM: A Highly Efficient Gradient Boosting Decision Tree, 2017, NIPS.
[21] Max Kuhn. Building Predictive Models in R Using the caret Package, 2008.
[22] Seth Flaxman, et al. European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation", 2016, AI Mag.
[23] Tianqi Chen, et al. XGBoost: A Scalable Tree Boosting System, 2016, KDD.
[24] Lars Kai Hansen, et al. A simple defense against adversarial attacks on heatmap explanations, 2020, ArXiv.
[25] Bernd Bischl, et al. mlr: Machine Learning in R, 2016, J. Mach. Learn. Res.
[26] Qian Yang, et al. Designing Theory-Driven User-Centric Explainable AI, 2019, CHI.
[27] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[28] Amina Adadi, et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI), 2018, IEEE Access.
[29] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[30] Gaël Varoquaux, et al. Scikit-learn: Machine Learning in Python, 2011, J. Mach. Learn. Res.
[31] J. Friedman. Greedy function approximation: A gradient boosting machine, 2001.
[32] Amro Najjar, et al. A Historical Perspective on Cognitive Science and Its Influence on XAI Research, 2019, EXTRAAMAS@AAMAS.
[33] Yuan Yu, et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[34] Megan Kurka, et al. Machine Learning Interpretability with H2O Driverless AI, 2019.
[35] Mireia Ribera, et al. Can we do better explanations? A proposal of user-centered explainable AI, 2019, IUI Workshops.
[36] Daniel Le Métayer, et al. A Multi-layered Approach for Interactive Black-box Explanations, 2020.
[37] Carlos Eduardo Scheidegger, et al. Certifying and Removing Disparate Impact, 2014, KDD.
[38] Cynthia Rudin, et al. All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously, 2019, J. Mach. Learn. Res.
[39] Chandan Sengupta. The Model Development Process, 2012.
[40] Amit Dhurandhar, et al. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques, 2019, ArXiv.
[41] Mark Bilandzic, et al. Bringing Transparency Design into Practice, 2018, IUI.
[42] Inioluwa Deborah Raji, et al. Model Cards for Model Reporting, 2018, FAT.
[43] A. Cann. Replication, 2003, Principles of Molecular Virology.
[44] Martin Wattenberg, et al. The What-If Tool: Interactive Probing of Machine Learning Models, 2019, IEEE Transactions on Visualization and Computer Graphics.
[45] H. Singer. An Historical Perspective, 1995.
[46] Gary Klein, et al. Metrics for Explainable AI: Challenges and Prospects, 2018, ArXiv.
[47] Nicholas Schmidt, et al. A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing, 2020, Inf.
[48] Brandon M. Greenwell. pdp: An R Package for Constructing Partial Dependence Plots, 2017, R J.
[49] Ameet Talwalkar, et al. MLlib: Machine Learning in Apache Spark, 2015, J. Mach. Learn. Res.
[50] Tim Miller, et al. Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences, 2017, ArXiv.
[51] Przemyslaw Biecek, et al. Explanations of model predictions with live and breakDown packages, 2018, R J.
[52] Przemyslaw Biecek, et al. modelStudio: Interactive Studio with Explanations for ML Predictive Models, 2019, J. Open Source Softw.
[53] Przemyslaw Biecek, et al. DALEX: explainers for complex predictive models, 2018, J. Mach. Learn. Res.
[54] Sameer Singh, et al. Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods, 2020, AIES.
[55] Bernd Bischl, et al. mlr3: A modern object-oriented machine learning framework in R, 2019, J. Open Source Softw.