[1] Rich Caruana et al. InterpretML: A Unified Framework for Machine Learning Interpretability, 2019, arXiv.
[2] R. Pace et al. Sparse spatial autoregressions, 1997.
[3] M. Saeed. Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II): A public-access intensive care unit database, 2011.
[4] T. H. Kyaw et al. Multiparameter Intelligent Monitoring in Intensive Care II: A public-access intensive care unit database, 2011, Critical Care Medicine.
[5] Christopher T. Lowenkamp et al. False Positives, False Negatives, and False Analyses: A Rejoinder to "Machine Bias: There's Software Used across the Country to Predict Future Criminals. And It's Biased against Blacks", 2016.
[6] William J. E. Potts et al. Generalized additive neural networks, 1999, KDD.
[7] Johannes Gehrke et al. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, 2015, KDD.
[8] Gaël Varoquaux et al. Scikit-learn: Machine Learning in Python, 2011, J. Mach. Learn. Res.
[9] A. Krizhevsky. Convolutional Deep Belief Networks on CIFAR-10, 2010.
[10] R. Tibshirani et al. Generalized Additive Models, 1986.
[11] Tianqi Chen et al. XGBoost: A Scalable Tree Boosting System, 2016, KDD.
[12] Rich Caruana et al. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation, 2017, AIES.
[13] Yoshua Bengio et al. A Closer Look at Memorization in Deep Networks, 2017, ICML.
[14] Jasper Snoek et al. Practical Bayesian Optimization of Machine Learning Algorithms, 2012, NIPS.
[15] Jian Sun et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[16] D. Sculley et al. Google Vizier: A Service for Black-Box Optimization, 2017, KDD.
[17] Cynthia Rudin et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[18] Jimmy Ba et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[19] Nitish Srivastava et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[20] Carlos Guestrin et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[21] Gianluca Bontempi et al. Adaptive Machine Learning for Credit Card Fraud Detection, 2015.
[22] Johannes Gehrke et al. Intelligible models for classification and regression, 2012, KDD.
[23] Geoffrey E. Hinton et al. Learning representations by back-propagating errors, 1986, Nature.
[24] Ilya Sutskever et al. Language Models are Unsupervised Multitask Learners, 2019.
[25] Trevor Hastie et al. Generalized linear and generalized additive models in studies of species distributions: setting the scene, 2002.
[26] Hany Farid et al. The accuracy, fairness, and limits of predicting recidivism, 2018, Science Advances.
[27] Kurt Hornik et al. Multilayer feedforward networks are universal approximators, 1989, Neural Networks.
[28] Jian Sun et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, ICCV.
[29] Demis Hassabis et al. Mastering the game of Go with deep neural networks and tree search, 2016, Nature.
[30] Yoshua Bengio et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[31] Geoffrey E. Hinton et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[32] Johannes Gehrke et al. Accurate intelligible models with pairwise interactions, 2013, KDD.
[33] Yoshua Bengio et al. On the Spectral Bias of Neural Networks, 2018, ICML.