Dustin Tran | James Davidson | Danijar Hafner | Alex Irpan | Timothy P. Lillicrap
[1] E. Jaynes. Information Theory and Statistical Mechanics, 1957.
[2] Lawrence D. Jackel, et al. Large Automatic Learning, Rule Extraction, and Generalization, 1987, Complex Syst.
[3] Kiyotoshi Matsuoka, et al. Noise injection into inputs in back-propagation learning, 1992, IEEE Trans. Syst. Man Cybern.
[4] David J. C. MacKay, et al. Information-Based Objective Functions for Active Data Selection, 1992, Neural Computation.
[5] H. Sebastian Seung, et al. Query by committee, 1992, COLT '92.
[6] David J. C. MacKay, et al. A Practical Bayesian Framework for Backpropagation Networks, 1992, Neural Computation.
[7] William A. Gale, et al. A sequential algorithm for training text classifiers, 1994, SIGIR '94.
[8] Christopher M. Bishop. Training with Noise is Equivalent to Tikhonov Regularization, 1995, Neural Computation.
[9] Geoffrey E. Hinton, et al. Bayesian Learning for Neural Networks, 1995.
[10] Kah Kay Sung, et al. Learning and example selection for object and pattern detection, 1995.
[11] Radford M. Neal. Priors for Infinite Networks, 1996.
[12] Guozhong An, et al. The Effects of Adding Noise During Backpropagation Training on a Generalization Performance, 1996, Neural Computation.
[13] Andrew McCallum, et al. Employing EM and Pool-Based Active Learning for Text Classification, 1998, ICML.
[14] Takeo Kanade, et al. Neural Network-Based Face Detection, 1998, IEEE Trans. Pattern Anal. Mach. Intell.
[15] Peter Müller, et al. Issues in Bayesian Analysis of Neural Network Models, 1998, Neural Computation.
[16] Daphne Koller, et al. Support Vector Machine Active Learning with Applications to Text Classification, 2000, J. Mach. Learn. Res.
[17] Andrew McCallum, et al. Toward Optimal Active Learning through Monte Carlo Estimation of Error Reduction, 2001, ICML.
[18] Jouko Lampinen, et al. Bayesian approach for neural networks – review and case studies, 2001, Neural Networks.
[19] H. Sebastian Seung, et al. Selective Sampling Using the Query by Committee Algorithm, 1997, Machine Learning.
[20] Sanjoy Dasgupta, et al. Analysis of a greedy active learning strategy, 2004, NIPS.
[21] Daniel Lee, et al. Beyond Gaussian Processes: On the Distributions of Infinite Networks, 2005, NIPS.
[22] Carl E. Rasmussen, et al. A Unifying View of Sparse Approximate Gaussian Process Regression, 2005, J. Mach. Learn. Res.
[23] Trevor Darrell, et al. Active Learning with Gaussian Processes for Object Categorization, 2007, ICCV.
[24] Yoshua Bengio, et al. Extracting and composing robust features with denoising autoencoders, 2008, ICML '08.
[25] Iain Murray, et al. Introduction to Gaussian Processes, 2008.
[26] Nikolaos Papanikolopoulos, et al. Multi-class active learning for image classification, 2009, CVPR.
[27] Burr Settles, et al. Active Learning Literature Survey, 2009.
[28] Aníbal R. Figueiras-Vidal, et al. Marginalized Neural Network Mixtures for Large-Scale Regression, 2010, IEEE Transactions on Neural Networks.
[29] Aapo Hyvärinen, et al. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models, 2010, AISTATS.
[30] Alex Graves, et al. Practical Variational Inference for Neural Networks, 2011, NIPS.
[31] Zoubin Ghahramani, et al. Bayesian Active Learning for Classification and Preference Learning, 2011, ArXiv.
[32] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[33] Xin Li, et al. Adaptive Active Learning for Image Classification, 2013, CVPR.
[34] Andrew L. Maas. Rectifier Nonlinearities Improve Neural Network Acoustic Models, 2013.
[35] Koray Kavukcuoglu, et al. Learning word embeddings efficiently with noise-contrastive estimation, 2013, NIPS.
[36] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[37] Neil D. Lawrence, et al. Gaussian Processes for Big Data, 2013, UAI.
[38] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[39] Quoc V. Le, et al. Sequence to Sequence Learning with Neural Networks, 2014, NIPS.
[40] David M. Blei, et al. Build, Compute, Critique, Repeat: Data Analysis with Latent Variable Models, 2014.
[41] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[42] Ryan P. Adams, et al. Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks, 2015, ICML.
[43] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[44] Shin Ishii, et al. Distributional Smoothing with Virtual Adversarial Training, 2015, ICLR.
[45] Julien Cornebise, et al. Weight Uncertainty in Neural Networks, 2015, ArXiv.
[46] Suchi Saria, et al. A Framework for Individualizing Predictions of Disease Trajectories by Exploiting Multi-Resolution Structure, 2015, NIPS.
[47] Marc Peter Deisenroth, et al. Distributed Gaussian Processes, 2015, ICML.
[48] Yee Whye Teh, et al. Mondrian Forests for Large-Scale Regression when Uncertainty Matters, 2015, AISTATS.
[49] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, CVPR.
[50] Max Welling, et al. Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors, 2016, ICML.
[51] Dustin Tran, et al. Edward: A library for probabilistic modeling, inference, and criticism, 2016, ArXiv.
[52] Heiga Zen, et al. WaveNet: A Generative Model for Raw Audio, 2016, SSW.
[53] Carl E. Rasmussen, et al. Manifold Gaussian Processes for regression, 2014, IJCNN.
[54] Dustin Tran, et al. Automatic Differentiation Variational Inference, 2016, J. Mach. Learn. Res.
[55] R. Srikant, et al. Principled Detection of Out-of-Distribution Examples in Neural Networks, 2017, ArXiv.
[56] Daniel Flam-Shepherd. Mapping Gaussian Process Priors to Bayesian Neural Networks, 2017.
[57] Geoffrey E. Hinton, et al. Regularizing Neural Networks by Penalizing Confident Output Distributions, 2017, ICLR.
[58] Kevin Gimpel, et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks, 2016, ICLR.
[59] Max Welling, et al. Bayesian Compression for Deep Learning, 2017, NIPS.
[60] José Miguel Hernández-Lobato, et al. Uncertainty Decomposition in Bayesian Neural Networks with Latent Variables, 2017, ArXiv:1706.08495.
[61] Christopher Burgess, et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, 2017, ICLR.
[62] Ruimao Zhang, et al. Cost-Effective Active Learning for Deep Image Classification, 2017, IEEE Transactions on Circuits and Systems for Video Technology.
[63] Zoubin Ghahramani, et al. Deep Bayesian Active Learning with Image Data, 2017, ICML.
[64] Kibok Lee, et al. Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples, 2017, ICLR.
[65] Jaehoon Lee, et al. Deep Neural Networks as Gaussian Processes, 2017, ICLR.
[66] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[67] Guodong Zhang, et al. Noisy Natural Gradient as Variational Inference, 2017, ICML.
[68] Dustin Tran, et al. Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches, 2018, ICLR.
[69] Mark J. F. Gales, et al. Predictive Uncertainty Estimation via Prior Networks, 2018, NeurIPS.
[70] Richard E. Turner, et al. Gaussian Process Behaviour in Wide Deep Neural Networks, 2018, ICLR.
[71] Soumya Ghosh, et al. Model Selection in Bayesian Neural Networks via Horseshoe Priors, 2017, J. Mach. Learn. Res.