Jasper Snoek | Dustin Tran | Florian Wenzel | Rodolphe Jenatton
[1] G. Brier. Verification of Forecasts Expressed in Terms of Probability, 1950.
[2] Esther Levin, et al. A statistical approach to learning and generalization in layered neural networks, 1989, Proc. IEEE.
[3] Lawrence D. Jackel, et al. Handwritten Digit Recognition with a Back-Propagation Network, 1989, NIPS.
[4] Pierre Priouret, et al. Adaptive Algorithms and Stochastic Approximations, 1990, Applications of Mathematics.
[5] Elie Bienenstock, et al. Neural Networks and the Bias/Variance Dilemma, 1992, Neural Computation.
[6] Robert Gibbons, et al. A primer in game theory, 1992.
[7] Jürgen Schmidhuber, et al. Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks, 1992, Neural Computation.
[8] Geoffrey E. Hinton, et al. Keeping the neural networks simple by minimizing the description length of the weights, 1993, COLT '93.
[9] Jürgen Schmidhuber, et al. A ‘Self-Referential’ Weight Matrix, 1993.
[10] Anders Krogh, et al. Neural Network Ensembles, Cross Validation, and Active Learning, 1994, NIPS.
[11] Geoffrey E. Hinton, et al. Bayesian Learning for Neural Networks, 1995.
[12] D. Opitz, et al. Popular Ensemble Methods: An Empirical Study, 1999, J. Artif. Intell. Res.
[13] Thomas G. Dietterich. Ensemble Methods in Machine Learning, 2000, Multiple Classifier Systems.
[14] Rich Caruana, et al. Ensemble selection from libraries of models, 2004, ICML.
[15] Rich Caruana, et al. Getting the Most Out of Ensemble Selection, 2006, Sixth International Conference on Data Mining (ICDM'06).
[16] Patrice Marcotte, et al. An overview of bilevel optimization, 2007, Ann. Oper. Res.
[17] Charles Kemp, et al. The discovery of structural form, 2008, Proceedings of the National Academy of Sciences.
[18] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[19] Oleksandr Makeyev, et al. Neural network with ensembles, 2010, The 2010 International Joint Conference on Neural Networks (IJCNN).
[20] Ryan P. Adams, et al. Learning the Structure of Deep Sparse Graphical Models, 2009, AISTATS.
[21] Yoshua Bengio, et al. Algorithms for Hyper-Parameter Optimization, 2011, NIPS.
[22] Sebastian Thrun, et al. Towards fully autonomous driving: Systems and algorithms, 2011, 2011 IEEE Intelligent Vehicles Symposium (IV).
[24] Yoshua Bengio, et al. Random Search for Hyper-Parameter Optimization, 2012, J. Mach. Learn. Res.
[25] Jasper Snoek, et al. Practical Bayesian Optimization of Machine Learning Algorithms, 2012, NIPS.
[26] Joshua B. Tenenbaum, et al. Structure Discovery in Nonparametric Regression through Compositional Kernel Search, 2013, ICML.
[27] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[28] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[29] Julien Cornebise, et al. Weight Uncertainty in Neural Networks, 2015, ICML.
[30] Michael Cogswell, et al. Why M Heads are Better than One: Training a Diverse Ensemble of Deep Networks, 2015, ArXiv.
[31] Aaron Klein, et al. Efficient and Robust Automated Machine Learning, 2015, NIPS.
[32] Joshua B. Tenenbaum, et al. Human-level concept learning through probabilistic program induction, 2015, Science.
[33] Milos Hauskrecht, et al. Obtaining Well Calibrated Probabilities Using Bayesian Binning, 2015, AAAI.
[34] Prabhat, et al. Scalable Bayesian Optimization Using Deep Neural Networks, 2015, ICML.
[35] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[36] Furong Huang, et al. Escaping From Saddle Points - Online Stochastic Gradient for Tensor Decomposition, 2015, COLT.
[38] Christian Gagné, et al. Bayesian Hyperparameter Optimization for Ensemble Learning, 2016, UAI.
[39] Li Li, et al. Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records, 2016, Scientific Reports.
[40] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[41] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[42] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[43] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[44] Quoc V. Le, et al. HyperNetworks, 2016, ICLR.
[45] Charles Blundell, et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, 2016, NIPS.
[46] Aaron Klein, et al. Hyperparameter Optimization, 2017, Encyclopedia of Machine Learning and Data Mining.
[47] D. Sculley, et al. Google Vizier: A Service for Black-Box Optimization, 2017, KDD.
[48] Kilian Q. Weinberger, et al. Snapshot Ensembles: Train 1, get M for free, 2017, ICLR.
[49] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[50] F. Hutter, et al. Practical Automated Machine Learning for the AutoML Challenge 2018, 2018.
[51] Dustin Tran, et al. Flipout: Efficient Pseudo-Independent Weight Perturbations on Mini-Batches, 2018, ICLR.
[52] Aaron C. Courville, et al. FiLM: Visual Reasoning with a General Conditioning Layer, 2017, AAAI.
[53] David Duvenaud, et al. Stochastic Hyperparameter Optimization through Hypernetworks, 2018, ArXiv.
[54] Jorge Nocedal, et al. Optimization Methods for Large-Scale Machine Learning, 2016, SIAM Rev.
[55] Theodore Lim, et al. SMASH: One-Shot Model Architecture Search through HyperNetworks, 2017, ICLR.
[56] Mohammad Babaeizadeh, et al. Adjustable Real-time Style Transfer, 2018, DGS@ICLR.
[57] Sebastian Nowozin, et al. Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift, 2019, NeurIPS.
[58] Jeremy Nixon, et al. Measuring Calibration in Deep Learning, 2019, CVPR Workshops.
[59] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2018, ICLR.
[60] Roger B. Grosse, et al. Self-Tuning Networks: Bilevel Optimization of Hyperparameters using Structured Best-Response Functions, 2019, ICLR.
[61] Balaji Lakshminarayanan, et al. Deep Ensembles: A Loss Landscape Perspective, 2019, ArXiv.
[62] Matthias Hein, et al. Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[63] Dustin Tran, et al. Bayesian Layers: A Module for Neural Network Uncertainty, 2018, NeurIPS.
[64] Andrew Gordon Wilson, et al. Cyclical Stochastic Gradient MCMC for Bayesian Deep Learning, 2019, ICLR.
[65] Michael W. Dusenberry, et al. Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors, 2020, ICML.
[66] Yee Whye Teh, et al. Neural Ensemble Search for Performant and Calibrated Predictions, 2020, ArXiv.
[67] José Miguel Hernández-Lobato, et al. Depth Uncertainty in Neural Networks, 2020, NeurIPS.
[68] Karsten M. Borgwardt, et al. Large-scale DNA-based phenotypic recording and deep learning enable highly accurate sequence-function mapping, 2020, Nature Communications.
[69] Bastiaan S. Veeling, et al. How Good is the Bayes Posterior in Deep Neural Networks Really?, 2020, ICML.
[70] Alexey Dosovitskiy, et al. You Only Train Once: Loss-Conditional Training of Deep Networks, 2020, ICLR.
[71] Cordelia Schmid, et al. Optimized Generic Feature Learning for Few-shot Classification across Domains, 2020, ArXiv.
[72] Rainer Hofmann-Wellenhof, et al. A deep learning system for differential diagnosis of skin diseases, 2019, Nature Medicine.
[73] Pavel Izmailov, et al. Bayesian Deep Learning and a Probabilistic Perspective of Generalization, 2020, NeurIPS.
[74] Dustin Tran, et al. BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning, 2020, ICLR.
[75] Thomas B. Schön, et al. Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).