Stochastic Normalizations as Bayesian Learning
[1] L. Joseph, et al. Bayesian Statistics: An Introduction, 1989.
[2] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[3] Boris Ginsburg, et al. Comparison of Batch Normalization and Weight Normalization Algorithms for the Large-scale Image Classification, 2017, ArXiv.
[4] Thomas Brox, et al. U-Net: Convolutional Networks for Biomedical Image Segmentation, 2015, MICCAI.
[5] Tim Salimans, et al. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks, 2016, NIPS.
[6] Venu Govindaraju, et al. Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks, 2016, ICML.
[7] Nitish Srivastava, et al. Dropout: A Simple Way to Prevent Neural Networks from Overfitting, 2014, J. Mach. Learn. Res.
[8] Zoubin Ghahramani, et al. A Theoretically Grounded Application of Dropout in Recurrent Neural Networks, 2015, NIPS.
[9] Diederik P. Kingma, et al. Variational Dropout and the Local Reparameterization Trick, 2015, NIPS.
[10] Pieter Abbeel, et al. Gradient Estimation Using Stochastic Computation Graphs, 2015, NIPS.
[11] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[12] S. Roth, et al. Lightweight Probabilistic Deep Networks, 2018, CVPR.
[13] Dmitry P. Vetrov, et al. Uncertainty Estimation via Stochastic Batch Normalization, 2018, ICLR.
[14] Sepp Hochreiter, et al. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), 2015, ICLR.
[15] Aleksander Madry, et al. How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift), 2018, NIPS.
[16] Nathalie Harder, et al. A Benchmark for Comparison of Cell Tracking Algorithms, 2014, Bioinformatics.
[17] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[18] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[19] Xiang Li, et al. Understanding the Disharmony Between Dropout and Batch Normalization by Variance Shift, 2018, CVPR 2019.
[20] David G. Luenberger, et al. Linear and Nonlinear Programming, 1984.
[21] Alex Graves, et al. Practical Variational Inference for Neural Networks, 2011, NIPS.
[22] Julien Cornebise, et al. Weight Uncertainty in Neural Networks, 2015, ArXiv.
[23] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, CVPR 2016.
[24] Kevin Smith, et al. Bayesian Uncertainty Estimation for Batch Normalized Deep Networks, 2018, ICML.
[25] Boris Flach, et al. Normalization of Neural Networks Using Analytic Variance Propagation, 2018, ArXiv.