Tong Che | Bo Li | Dongsheng Li | Ziwei Liu | Kaiyang Zhou | Yezhen Wang
[1] Mohammad Norouzi, et al. No MCMC for me: Amortized sampling for fast and stable training of energy-based models, 2021, ICLR.
[2] Hongxia Jin, et al. Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data, 2020, CVPR.
[3] Charles J. Geyer, et al. Practical Markov Chain Monte Carlo, 1992.
[4] Tao Xiang, et al. Domain Generalization in Vision: A Survey, 2021.
[5] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2018, ICLR.
[6] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proceedings of the IEEE.
[7] Yee Whye Teh, et al. Do Deep Generative Models Know What They Don't Know?, 2018, ICLR.
[8] Gopinath Chennupati, et al. On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks, 2019, NeurIPS.
[9] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[10] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, CVPR.
[11] Fu Jie Huang, et al. A Tutorial on Energy-Based Learning, 2006.
[12] Kilian Q. Weinberger, et al. On Calibration of Modern Neural Networks, 2017, ICML.
[13] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[14] Erik Nijkamp, et al. Learning Non-Convergent Non-Persistent Short-Run MCMC Toward Energy-Based Model, 2019, NeurIPS.
[15] Ying Nian Wu, et al. Learning Energy-Based Models by Diffusion Recovery Likelihood, 2020, ICLR.
[16] Uri Shalit, et al. On Calibration and Out-of-domain Generalization, 2021, NeurIPS.
[17] Terrance E. Boult, et al. Towards Open Set Deep Networks, 2015, CVPR.
[18] Kibok Lee, et al. Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples, 2017, ICLR.
[19] Mohammad Norouzi, et al. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One, 2019, ICLR.
[20] Charles Blundell, et al. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles, 2016, NeurIPS.
[21] Arthur Gretton, et al. Generalized Energy Based Models, 2020, ICLR.
[22] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[23] Yoshua Bengio, et al. Better Mixing via Deep Representations, 2012, ICML.
[24] R. Srikant, et al. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks, 2017, ICLR.
[25] Yoshua Bengio, et al. Deep Directed Generative Models with Energy-Based Probability Estimation, 2016, arXiv.
[26] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, CVPR.
[27] Maya R. Gupta, et al. To Trust Or Not To Trust A Classifier, 2018, NeurIPS.
[28] Tomas Pfister, et al. Distance-Based Learning from Errors for Confidence Calibration, 2020, ICLR.
[29] Geoffrey E. Hinton, et al. When Does Label Smoothing Help?, 2019, NeurIPS.
[30] Jason Yosinski, et al. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, 2014, CVPR.
[31] C. Geyer. Markov Chain Monte Carlo Maximum Likelihood, 1991.
[32] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[33] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[34] Yoshua Bengio, et al. Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling, 2020, NeurIPS.
[35] Yee Whye Teh, et al. Bayesian Learning via Stochastic Gradient Langevin Dynamics, 2011, ICML.
[36] Milos Hauskrecht, et al. Obtaining Well Calibrated Probabilities Using Bayesian Binning, 2015, AAAI.
[37] Igor Mordatch, et al. Implicit Generation and Modeling with Energy Based Models, 2019, NeurIPS.
[38] Geoffrey E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence, 2002, Neural Computation.
[39] Jasper Snoek, et al. Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks, 2020, arXiv.
[40] Matthias Hein, et al. Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem, 2018, CVPR.