[1] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[2] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] W. Pirie. Spearman Rank Correlation Coefficient, 2006.
[4] Richard Socher, et al. Efficient and Robust Question Answering from Minimal Context over Documents, 2018, ACL.
[5] Richard S. Zemel, et al. Understanding the Origins of Bias in Word Embeddings, 2018, ICML.
[6] Kaiming He, et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017, ArXiv.
[7] Philip Bachman, et al. NewsQA: A Machine Comprehension Dataset, 2016, Rep4NLP@ACL.
[8] Omar ElHarrouss, et al. Image Inpainting: A Review, 2019, Neural Processing Letters.
[9] Arvid Lundervold, et al. An overview of deep learning in medical imaging focusing on MRI, 2018, Zeitschrift für medizinische Physik.
[10] Christian Tjandraatmadja, et al. Bounding and Counting Linear Regions of Deep Neural Networks, 2017, ICML.
[11] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[12] C. Spearman. The proof and measurement of association between two things, 2015, International Journal of Epidemiology.
[13] S. Williams, et al. Pearson's correlation coefficient, 1996, The New Zealand Medical Journal.
[14] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[15] Kenji Suzuki, et al. Overview of deep learning in medical imaging, 2017, Radiological Physics and Technology.
[16] Fabrizio Sebastiani, et al. Machine learning in automated text categorization, 2001, CSUR.
[17] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[18] Kunle Olukotun, et al. DAWNBench: An End-to-End Deep Learning Benchmark and Competition, 2017.
[19] Pradeep Ravikumar, et al. Representer Point Selection for Explaining Deep Neural Networks, 2018, NeurIPS.
[20] Xiaojun Wan, et al. Abstractive Document Summarization with a Graph-Based Attentional Neural Model, 2017, ACL.
[21] Percy Liang, et al. Stronger data poisoning attacks break data sanitization defenses, 2018, Machine Learning.
[22] Michael I. Jordan, et al. A Swiss Army Infinitesimal Jackknife, 2018, AISTATS.
[23] Suchi Saria, et al. Can You Trust This Prediction? Auditing Pointwise Reliability After Learning, 2019, AISTATS.
[24] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[25] Percy Liang, et al. On the Accuracy of Influence Functions for Measuring Group Effects, 2019, NeurIPS.
[26] David S. Melnick, et al. International evaluation of an AI system for breast cancer screening, 2020, Nature.
[27] Frederick Liu, et al. Estimating Training Data Influence by Tracking Gradient Descent, 2020, NeurIPS.
[28] Léon Bottou, et al. Large-Scale Machine Learning with Stochastic Gradient Descent, 2010, COMPSTAT.
[29] P. Mykland, et al. Nonlinear Experiments: Optimal Design and Inference Based on Likelihood, 1993.
[30] S. Weisberg, et al. Characterizations of an Empirical Influence Function for Detecting Influential Cases in Regression, 1980.
[31] Wei-Yang Lin, et al. Machine Learning in Financial Crisis Prediction: A Survey, 2012, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews).
[32] Naman Agarwal, et al. Second Order Stochastic Optimization in Linear Time, 2016, ArXiv.
[33] Yan Hao, et al. Image Segmentation Algorithms Overview, 2017, ArXiv.
[34] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[35] S. Feizi, et al. Second-Order Group Influence Functions for Black-Box Predictions, 2019, ArXiv.
[36] Leslie N. Smith, et al. A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay, 2018, ArXiv.
[37] Haipeng Shen, et al. Artificial intelligence in healthcare: past, present and future, 2017, Stroke and Vascular Neurology.
[38] Klaus-Robert Müller, et al. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models, 2017, ArXiv.
[39] Sebastian Ruder, et al. An overview of gradient descent optimization algorithms, 2016, ArXiv.
[40] Michael I. Jordan, et al. A Higher-Order Swiss Army Infinitesimal Jackknife, 2019, ArXiv.
[41] Richard Szeliski, et al. Computer Vision: Algorithms and Applications, 2011, Texts in Computer Science.
[42] J. Shewchuk. An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, 1994.
[43] S. Weisberg, et al. Residuals and Influence in Regression, 1982.
[44] Xindong Wu, et al. Object Detection With Deep Learning: A Review, 2018, IEEE Transactions on Neural Networks and Learning Systems.
[45] Abubakar Abid, et al. Interpretation of Neural Networks is Fragile, 2017, AAAI.
[46] Barak A. Pearlmutter. Fast Exact Multiplication by the Hessian, 1994, Neural Computation.
[47] Yoav Goldberg, et al. Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness?, 2020, ACL.