SurvNAM: The machine learning survival model explanation
[1] Peter Tiňo,et al. A Survey on Neural Network Interpretability , 2020, IEEE Transactions on Emerging Topics in Computational Intelligence.
[2] Andrea Vedaldi,et al. Explanations for Attributing Deep Neural Network Predictions , 2019, Explainable AI.
[3] Percy Liang,et al. Understanding Black-box Predictions via Influence Functions , 2017, ICML.
[4] Artur S. d'Avila Garcez,et al. Measurable Counterfactual Local Explanations for Any Classifier , 2019, ECAI.
[5] F. Harrell,et al. Evaluating the yield of medical tests. , 1982, JAMA.
[6] Max Welling,et al. Attention-based Deep Multiple Instance Learning , 2018, ICML.
[7] J. Lafferty,et al. Sparse additive models , 2007, ArXiv.
[8] Tuo Zhao,et al. Towards Understanding the Importance of Shortcut Connections in Residual Networks , 2019, NeurIPS.
[9] Leo Breiman,et al. Random Forests , 2001, Machine Learning.
[10] Insuk Sohn,et al. Analysis of Survival Data with Group Lasso , 2012, Commun. Stat. Simul. Comput..
[11] Nader Ebrahimi,et al. A semi-parametric generalization of the Cox proportional hazards regression model: Inference and applications , 2011, Comput. Stat. Data Anal..
[12] Lev V. Utkin,et al. SurvLIME: A method for explaining machine learning survival models , 2020, Knowl. Based Syst..
[13] J. Friedman. Greedy function approximation: A gradient boosting machine. , 2001 .
[14] Sabine Van Huffel,et al. Support vector methods for survival analysis: a comparison between ranking and regression approaches , 2011, Artif. Intell. Medicine.
[15] Mark A. Neerincx,et al. Contrastive Explanations with Local Foil Trees , 2018, ICML 2018.
[16] Carlos Guestrin,et al. Anchors: High-Precision Model-Agnostic Explanations , 2018, AAAI.
[17] Amina Adadi,et al. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI) , 2018, IEEE Access.
[18] C. Rudin,et al. Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges , 2021, Statistics Surveys.
[19] Jian Sun,et al. Deep Residual Learning for Image Recognition , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Lev V. Utkin,et al. Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines , 2020, Knowl. Based Syst..
[21] Adler J. Perotte,et al. Deep Survival Analysis , 2016, MLHC.
[22] Xia Hu,et al. Techniques for interpretable machine learning , 2018, Commun. ACM.
[23] Andreas Ziegler,et al. Random forests for survival analysis using maximally selected rank statistics , 2016, ArXiv.
[24] Scott Lundberg,et al. A Unified Approach to Interpreting Model Predictions , 2017, NIPS.
[25] D Faraggi,et al. A neural network model for survival data. , 1995, Statistics in medicine.
[26] Jaime S. Cardoso,et al. Machine Learning Interpretability: A Survey on Methods and Metrics , 2019, Electronics.
[27] Hong Wang,et al. Random survival forest with space extensions for censored data , 2017, Artif. Intell. Medicine.
[28] Jon Arni Steingrimsson,et al. Deep learning for survival outcomes , 2019, Statistics in medicine.
[29] Minh N. Vu,et al. Evaluating Explainers via Perturbation , 2019, ArXiv.
[30] Lev V. Utkin,et al. A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds , 2020, Neural Networks.
[31] Chris Russell,et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR , 2017, ArXiv.
[32] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, ArXiv.
[33] Rich Caruana,et al. How Interpretable and Trustworthy are GAMs? , 2020, KDD.
[34] Andreas Bender,et al. A generalized additive model approach to time-to-event analysis , 2018 .
[35] Geoffrey E. Hinton,et al. Neural Additive Models: Interpretable Machine Learning with Neural Nets , 2020, NeurIPS.
[36] Ping Wang,et al. Machine Learning for Survival Analysis , 2019, ACM Comput. Surv..
[37] R. Tibshirani,et al. Generalized additive models for medical research , 1995, Statistical methods in medical research.
[38] Rich Caruana,et al. InterpretML: A Unified Framework for Machine Learning Interpretability , 2019, ArXiv.
[39] Marvin N. Wright,et al. Unbiased split variable selection for random survival forests using maximally selected rank statistics , 2017, Statistics in medicine.
[40] Junzhou Huang,et al. Deep convolutional neural network for survival analysis with pathological images , 2016, 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM).
[41] Patrick Pérez,et al. Explainability of vision-based autonomous driving systems: Review and challenges , 2021, ArXiv.
[42] Vijayan N. Nair,et al. Adaptive Explainable Neural Networks (Axnns) , 2020, ArXiv.
[43] Georg Langs,et al. Causability and explainability of artificial intelligence in medicine , 2019, WIREs Data Mining Knowl. Discov..
[44] Ziyan Wu,et al. Counterfactual Visual Explanations , 2019, ICML.
[45] Andrea Vedaldi,et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[46] Qiang Huang,et al. GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks , 2020, IEEE Transactions on Knowledge and Data Engineering.
[47] Erik Strumbelj,et al. An Efficient Explanation of Individual Classifications using Game Theory , 2010, J. Mach. Learn. Res..
[48] Sharath M. Shankaranarayana,et al. ALIME: Autoencoder Based Approach for Local Interpretability , 2019, IDEAL.
[49] Lev V. Utkin,et al. An Explanation Method for Black-Box Machine Learning Survival Models Using the Chebyshev Distance , 2020 .
[50] R. Tibshirani. The lasso method for variable selection in the Cox model. , 1997, Statistics in medicine.
[51] David W. Hosmer,et al. Applied Survival Analysis: Regression Modeling of Time-to-Event Data , 2008 .
[52] Michael Siebers,et al. Enriching Visual with Verbal Explanations for Relational Concepts - Combining LIME with Aleph , 2019, PKDD/ECML Workshops.
[53] Johannes Gehrke,et al. Intelligible models for classification and regression , 2012, KDD.
[54] Trevor Darrell,et al. Grounding Visual Explanations , 2018, ECCV.
[55] J. Friedman. Stochastic gradient boosting , 2002 .
[56] Qi Bi,et al. Differential Convolution Feature Guided Deep Multi-Scale Multiple Instance Learning for Aerial Scene Classification , 2021, ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[57] Hao Helen Zhang,et al. Adaptive Lasso for Cox's proportional hazards model , 2007 .
[58] Nassir Navab,et al. An Efficient Training Algorithm for Kernel Survival Support Vector Machines , 2016, ArXiv.
[59] Matthias Schmid,et al. On the use of Harrell's C for clinical risk prediction via random survival forests , 2015, Expert Syst. Appl..
[60] Agus Sudjianto,et al. GAMI-Net: An Explainable Neural Network based on Generalized Additive Models with Structured Interactions , 2020, Pattern Recognit..
[61] Alejandro Barredo Arrieta,et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI , 2019, Inf. Fusion.
[62] Kai Yang,et al. A Deep Active Survival Analysis Approach for Precision Treatment Recommendations: Application of Prostate Cancer , 2018, Expert Syst. Appl..
[63] Shuhei Kaneko,et al. Enhancing the Lasso Approach for Developing a Survival Prediction Model Based on Gene Expression Data , 2015, Comput. Math. Methods Medicine.
[64] Rich Caruana,et al. Axiomatic Interpretability for Multiclass Additive Models , 2018, KDD.
[65] Vaishak Belle,et al. Principles and Practice of Explainable Machine Learning , 2020, Frontiers in Big Data.
[66] Chandan Singh,et al. Definitions, methods, and applications in interpretable machine learning , 2019, Proceedings of the National Academy of Sciences.
[67] Robert Tibshirani,et al. Survival analysis with high-dimensional covariates , 2010, Statistical methods in medical research.
[68] Uri Shaham,et al. DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network , 2016, BMC Medical Research Methodology.
[69] Cynthia Rudin,et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead , 2018, Nature Machine Intelligence.
[70] Amit Dhurandhar,et al. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques , 2019, ArXiv.
[71] Guisong Xia,et al. A Multiple-Instance Densely-Connected ConvNet for Aerial Scene Classification , 2019, IEEE Transactions on Image Processing.
[72] R. Tibshirani,et al. Repeated observation of breast tumor subtypes in independent gene expression data sets , 2003, Proceedings of the National Academy of Sciences of the United States of America.
[73] Boyang Li,et al. NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks , 2019, ArXiv.
[74] Jie Chen,et al. Locally Interpretable Models and Effects based on Supervised Partitioning (LIME-SUP) , 2018, ArXiv.
[75] Hemant Ishwaran,et al. Evaluating Random Forests for Survival Analysis using Prediction Error Curves. , 2012, Journal of statistical software.
[76] Kate Saenko,et al. RISE: Randomized Input Sampling for Explanation of Black-box Models , 2018, BMVC.
[77] Abdul Kudus,et al. Decision Tree for Competing Risks Survival Probability in Breast Cancer Study , 2008 .
[78] Maozhen Li,et al. Explaining the black-box model: A survey of local interpretation methods for deep neural networks , 2021, Neurocomputing.
[79] Jesse Thomason,et al. Interpreting Black Box Models with Statistical Guarantees , 2019, ArXiv.
[80] Theodoros Evgeniou,et al. A comparison of instance-level counterfactual explanation algorithms for behavioral and textual data: SEDC, LIME-C and SHAP-C , 2019, Advances in Data Analysis and Classification.
[81] Federico Rotolo,et al. Empirical extensions of the lasso penalty to reduce the false discovery rate in high‐dimensional Cox regression models , 2016, Statistics in medicine.
[82] Dorit Merhof,et al. Image-based Survival Analysis for Lung Cancer Patients using CNNs. , 2018 .
[83] Imran Kurt,et al. The comparisons of random survival forests and Cox regression analysis with simulation and an application related to breast cancer , 2009, Expert Syst. Appl..
[84] Naimul Mefraz Khan,et al. DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems , 2019, ArXiv.
[85] Ralf Bender,et al. Generating survival times to simulate Cox proportional hazards models , 2005, Statistics in medicine.
[86] D. R. Cox. Regression Models and Life-Tables , 1972, Journal of the Royal Statistical Society, Series B.
[87] Maia Lesosky,et al. A comparison of the conditional inference survival forest model to random survival forests based on a simulation study as well as on two applications with time-to-event data , 2017, BMC Medical Research Methodology.
[88] N. Simon,et al. Generalized Sparse Additive Models , 2019, J. Mach. Learn. Res..
[89] Marcel van Gerven,et al. Explainable Deep Learning: A Field Guide for the Uninitiated , 2020, J. Artif. Intell. Res..
[90] Christophe Ambroise,et al. Regularization Methods for Additive Models , 2003, IDA.
[91] Franco Turini,et al. A Survey of Methods for Explaining Black Box Models , 2018, ACM Comput. Surv..
[92] Janis Klaise,et al. Interpretable Counterfactual Explanations Guided by Prototypes , 2019, ECML/PKDD.