Dropout Strikes Back: Improved Uncertainty Estimation via Diversity Sampling