[1] Bhavya Kailkhura, et al. Probabilistic Neighbourhood Component Analysis: Sample Efficient Uncertainty Estimation in Deep Learning, 2020, ArXiv.
[2] Maithra Raghu, et al. A Survey of Deep Learning for Scientific Discovery, 2020, ArXiv.
[3] Bhavya Kailkhura, et al. Mix-n-Match: Ensemble and Compositional Methods for Uncertainty Calibration in Deep Learning, 2020, ICML.
[4] Brian Gallagher, et al. Predicting compressive strength of consolidated molecular solids using computer vision and deep learning, 2019, Materials & Design.
[5] Pramod K. Varshney, et al. Anomalous Example Detection in Deep Learning: A Survey, 2020, IEEE Access.
[6] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[7] Balaji Lakshminarayanan, et al. Deep Ensembles: A Loss Landscape Perspective, 2019, ArXiv.
[8] M. Marques, et al. Recent advances and applications of machine learning in solid-state materials science, 2019, npj Computational Materials.
[9] Mojtaba Haghighatlari, et al. Thinking Globally, Acting Locally: On the Issue of Training Set Imbalance and the Case for Local Machine Learning Models in Chemistry, 2019.
[10] Sebastian Nowozin, et al. Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift, 2019, NeurIPS.
[11] Isidro Cortes-Ciriano, et al. Reliable Prediction Errors for Deep Neural Networks Using Test-Time Dropout, 2019, J. Chem. Inf. Model.
[12] Bhavya Kailkhura, et al. Reliable and explainable machine-learning methods for accelerated material discovery, 2019, npj Computational Materials.
[13] Matthias Hein, et al. Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem, 2019, CVPR.
[14] Thomas G. Dietterich, et al. Deep Anomaly Detection with Outlier Exposure, 2018, ICLR.
[15] Isidro Cortes-Ciriano, et al. Deep Confidence: A Computationally Efficient Framework for Calculating Reliable Errors for Deep Neural Networks, 2018, J. Chem. Inf. Model.
[16] Stefano Ermon, et al. Accurate Uncertainties for Deep Learning Using Calibrated Regression, 2018, ICML.
[17] Guoyan Zheng, et al. Crowd Counting with Deep Negative Correlation Learning, 2018, CVPR.
[18] Erin Antono, et al. Building Data-driven Models with Microstructural Images: Generalization and Interpretability, 2017, ArXiv.
[19] Yue Liu, et al. Materials discovery and design using machine learning, 2017.
[20] Zenghui Wang, et al. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review, 2017, Neural Computation.
[21] Chiho Kim, et al. Machine learning in materials informatics: recent applications and prospects, 2017, npj Computational Materials.
[22] Kilian Q. Weinberger, et al. On Calibration of Modern Neural Networks, 2017, ICML.
[23] Ran El-Yaniv, et al. Selective Classification for Deep Neural Networks, 2017, NIPS.
[24] Alex Kendall, et al. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, 2017, NIPS.
[25] Max Welling, et al. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks, 2017, ICML.
[26] Charles Blundell, et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, 2016, NIPS.
[27] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[28] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[29] Yarin Gal. Uncertainty in Deep Learning, 2016.
[30] Synho Do, et al. How much data is needed to train a medical image deep learning system to achieve necessary high accuracy, 2015, ArXiv.
[31] Frank Hutter, et al. Speeding Up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves, 2015, IJCAI.
[32] Geoffrey E. Hinton, et al. Deep Learning, 2015, Nature.
[33] Milos Hauskrecht, et al. Obtaining Well Calibrated Probabilities Using Bayesian Binning, 2015, AAAI.
[34] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[35] Senén Barro, et al. Do we need hundreds of classifiers to solve real world classification problems?, 2014, J. Mach. Learn. Res.
[36] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[37] Marc Dymetman, et al. Prediction of Learning Curves in Machine Translation, 2012, ACL.
[38] Ran El-Yaniv, et al. On the Foundations of Noise-free Selective Classification, 2010, J. Mach. Learn. Res.
[39] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[40] Klaus-Robert Müller, et al. Covariate Shift Adaptation by Importance Weighted Cross Validation, 2007, J. Mach. Learn. Res.
[41] A. Raftery, et al. Strictly Proper Scoring Rules, Prediction, and Estimation, 2007.
[42] B. L. Weeks, et al. Changes in Pore Size Distribution upon Thermal Cycling of TATB-based Explosives Measured by Ultra-Small Angle X-Ray Scattering, 2006.
[43] Rich Caruana, et al. Predicting good probabilities with supervised learning, 2005, ICML.
[44] C. E. Shannon. A Mathematical Theory of Communication, 1948, Bell System Technical Journal.
[45] Thomas G. Dietterich. Ensemble Methods in Machine Learning, 2000, Multiple Classifier Systems.
[46] M. Zweig, et al. Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine, 1993, Clinical Chemistry.
[47] C. K. Chow. An optimum character recognition system using decision functions, 1957, IRE Trans. Electron. Comput.