[1] Wray L. Buntine, et al. Bayesian Back-Propagation, 1991, Complex Syst.
[2] Marcus Liwicki, et al. Bayesian Convolutional Neural Networks with Variational Inference, 2018, 1806.05978.
[3] Oriol Vinyals, et al. Bayesian Recurrent Neural Networks, 2017, ArXiv.
[4] Geoffrey E. Hinton, et al. A View of the EM Algorithm that Justifies Incremental, Sparse, and other Variants, 1998, Learning in Graphical Models.
[5] Ron Meir, et al. Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights, 2014, NIPS.
[6] Jürgen Schmidhuber, et al. Simplifying Neural Nets by Discovering Flat Minima, 1994, NIPS.
[7] Yarin Gal, et al. Dropout as a Bayesian Approximation: Insights and Applications, 2015.
[8] Antonio Torralba, et al. 80 Million Tiny Images: A Large Dataset for Non-parametric Object and Scene Recognition, 2008, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[9] Huaiyu Zhu. On Information and Sufficiency, 1997.
[10] Alex Graves, et al. Practical Variational Inference for Neural Networks, 2011, NIPS.
[11] Karl J. Friston, et al. Variational free energy and the Laplace approximation, 2007, NeuroImage.
[12] Zoubin Ghahramani, et al. Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference, 2015, ArXiv.
[13] Nitish Srivastava, et al. Improving neural networks by preventing co-adaptation of feature detectors, 2012, ArXiv.
[14] Julien Cornebise, et al. Weight Uncertainty in Neural Networks, 2015, ArXiv.
[15] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[16] Erich Elsen, et al. Exploring Sparsity in Recurrent Neural Networks, 2017, ICLR.
[17] Myunghee Cho Paik, et al. Uncertainty quantification using Bayesian neural networks in classification: Application to ischemic stroke lesion segmentation, 2018.
[18] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996.
[19] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Charles M. Bishop, et al. Ensemble learning in Bayesian neural networks, 1998.
[21] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[22] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[23] William T. Freeman, et al. Constructing free-energy approximations and generalized belief propagation algorithms, 2005, IEEE Transactions on Information Theory.
[24] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[25] Mark Sandler, et al. The Power of Sparsity in Convolutional Neural Networks, 2017, ArXiv.
[26] D. MacKay, et al. A Practical Bayesian Framework for Backprop Networks, 1991.
[27] Max Welling, et al. Variational Dropout and the Local Reparameterization Trick, 2015, NIPS.
[28] Kumar Shridhar, et al. Bayesian Convolutional Neural Networks, 2018, ArXiv.
[29] Chih-Yuan Yang, et al. Single-Image Super-Resolution: A Benchmark, 2014, ECCV.
[30] Geoffrey E. Hinton, et al. Keeping the neural networks simple by minimizing the description length of the weights, 1993, COLT '93.
[31] Filip De Turck, et al. Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks, 2016, ArXiv.
[32] Alex Graves, et al. Stochastic Backpropagation through Mixture Density Distributions, 2016, ArXiv.
[33] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[34] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.
[35] Yann LeCun, et al. Transforming Neural-Net Output Levels to Probability Distributions, 1990, NIPS.
[36] Daniel Rueckert, et al. Cardiac Image Super-Resolution with Global Correspondence Using Multi-Atlas PatchMatch, 2013, MICCAI.
[37] Dmitry Vetrov, et al. Variance Networks: When Expectation Does Not Meet Your Expectations, 2018, ICLR.
[38] Dmitry P. Vetrov, et al. Variational Dropout Sparsifies Deep Neural Networks, 2017, ICML.
[39] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[40] Joan Bruna, et al. Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation, 2014, NIPS.
[41] Alex Kendall, et al. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, 2017, NIPS.
[42] David MacKay, et al. Probable networks and plausible predictions - a review of practical Bayesian methods for supervised neural networks, 1995.
[43] Wonyong Sung, et al. Structured Pruning of Deep Convolutional Neural Networks, 2015, ACM J. Emerg. Technol. Comput. Syst.
[44] Daniel Rueckert, et al. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[45] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[46] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[47] Ming Yang, et al. Compressing Deep Convolutional Networks using Vector Quantization, 2014, ArXiv.
[48] Yücel Altunbasak, et al. Eigenface-domain super-resolution for face recognition, 2003, IEEE Trans. Image Process.
[49] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[50] Jitendra Malik, et al. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics, 2001, Proceedings Eighth IEEE International Conference on Computer Vision (ICCV 2001).
[51] Dustin Tran, et al. Reliable Uncertainty Estimates in Deep Neural Networks using Noise Contrastive Priors, 2018, ArXiv.
[52] Zachary Chase Lipton, et al. Efficient Exploration for Dialogue Policy Learning with BBQ Networks & Replay Buffer Spiking, 2016.
[53] A. Der Kiureghian, et al. Aleatory or epistemic? Does it matter?, 2009.
[54] Andrew McCallum, et al. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data, 2001, ICML.
[55] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[56] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[57] Mathieu Salzmann, et al. Learning the Number of Neurons in Deep Networks, 2016, NIPS.
[58] Christopher D. Manning, et al. Fast dropout training, 2013, ICML.
[59] Ivan V. Oseledets, et al. Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition, 2014, ICLR.
[60] Jianfeng Gao, et al. Efficient Exploration for Dialog Policy Learning with Deep BBQ Networks & Replay Buffer Spiking, 2016, ArXiv.