Interpretable Machine Learning Tools: A Survey
[1] Deepak Venugopal, et al. DDoS Intrusion Detection Through Machine Learning Ensemble, 2019, 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C).
[2] Frederick T. Sheldon, et al. Empirical Evaluation of the Ensemble Framework for Feature Selection in DDoS Attack, 2020, 2020 7th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2020 6th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom).
[3] Daniel W. Apley, et al. Visualizing the effects of predictor variables in black box supervised learning models, 2016, Journal of the Royal Statistical Society: Series B (Statistical Methodology).
[4] Jude W. Shavlik, et al. Advances in Neural Information Processing Systems, 1996.
[5] Sajjan G. Shiva, et al. A Holistic Approach for Detecting DDoS Attacks by Using Ensemble Unsupervised Machine Learning, 2020, Advances in Intelligent Systems and Computing.
[6] Tianqi Chen, et al. XGBoost: A Scalable Tree Boosting System, 2016, KDD.
[7] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[8] Eric R. Ziegel, et al. The Elements of Statistical Learning, 2003, Technometrics.
[9] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[10] Alexander Binder, et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation, 2015, PLoS ONE.
[11] Monotonic Constraints, 2009, Encyclopedia of Database Systems.
[12] Emil Pitkin, et al. Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation, 2013, arXiv:1309.6392.
[13] Jie Chen, et al. Explainable Neural Networks based on Additive Index Models, 2018, ArXiv.
[14] J. Friedman. Greedy function approximation: A gradient boosting machine, 2001.
[15] Amit Dhurandhar, et al. Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives, 2018, NeurIPS.
[16] Cynthia Rudin, et al. Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the "Rashomon" Perspective, 2018.
[17] Trevor Hastie, et al. Regularization Paths for Generalized Linear Models via Coordinate Descent, 2010, Journal of Statistical Software.
[18] Rich Caruana, et al. InterpretML: A Unified Framework for Machine Learning Interpretability, 2019, ArXiv.
[19] Sajjan G. Shiva, et al. A Stealth Migration Approach to Moving Target Defense in Cloud Computing, 2019.
[20] Bernd Bischl, et al. iml: An R package for Interpretable Machine Learning, 2018, Journal of Open Source Software.
[21] Tie-Yan Liu, et al. LightGBM: A Highly Efficient Gradient Boosting Decision Tree, 2017, NIPS.
[22] Przemyslaw Biecek, et al. DALEX: explainers for complex predictive models, 2018, Journal of Machine Learning Research.
[23] Cynthia Rudin, et al. Supersparse linear integer models for optimized medical scoring systems, 2015, Machine Learning.
[24] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[25] Amit Dhurandhar, et al. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques, 2019, ArXiv.
[26] Saikat Das, et al. CoRuM: Collaborative Runtime Monitor Framework for Application Security, 2018, 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion (UCC Companion).
[27] Brandon M. Greenwell. pdp: An R Package for Constructing Partial Dependence Plots, 2017, The R Journal.