Will Grathwohl | Kuan-Chieh Wang | Jörn-Henrik Jacobsen | David Duvenaud | Mohammad Norouzi | Kevin Swersky
[1] Geoffrey E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence, 2002, Neural Computation.
[2] Yingzhen Li, et al. Are Generative Classifiers More Robust to Adversarial Attacks?, 2018, ICML.
[3] Richard S. Zemel, et al. Adversarial Distillation of Bayesian Neural Network Posteriors, 2018, ICML.
[4] Yang Song, et al. Generative Modeling by Estimating Gradients of the Data Distribution, 2019, NeurIPS.
[5] Rishi Sharma, et al. A Note on the Inception Score, 2018, ArXiv.
[6] David Duvenaud, et al. Invertible Residual Networks, 2018, ICML.
[7] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[8] Yee Whye Teh, et al. Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality, 2019.
[9] Logan Engstrom, et al. Synthesizing Robust Adversarial Examples, 2017, ICML.
[10] Zhuowen Tu, et al. Wasserstein Introspective Neural Networks, 2018, CVPR.
[11] Matthias Bethge, et al. Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models, 2017, ICLR.
[12] R. Srikant, et al. Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks, 2017, ICLR.
[13] Richard Zemel, et al. Conditional Generative Models are not Robust, 2019, ArXiv.
[14] Bernhard Schölkopf, et al. Adversarial Vulnerability of Neural Networks Increases With Input Dimension, 2018, ArXiv.
[15] Zhuowen Tu, et al. Introspective Classification with Convolutional Nets, 2017, NIPS.
[16] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[17] Erik Nijkamp, et al. On Learning Non-Convergent Short-Run MCMC Toward Energy-Based Model, 2019, ArXiv.
[18] David A. Wagner, et al. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, 2018, ICML.
[19] Igor Mordatch, et al. Implicit Generation and Generalization with Energy Based Models, 2018.
[20] Yuichi Yoshida, et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[21] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[22] Fu Jie Huang, et al. A Tutorial on Energy-Based Learning, 2006.
[23] Prafulla Dhariwal, et al. Glow: Generative Flow with Invertible 1x1 Convolutions, 2018, NeurIPS.
[24] Matthias Bethge, et al. Towards the first adversarially robust neural network model on MNIST, 2018, ICLR.
[25] Zhijian Ou, et al. Learning Neural Random Fields with Inclusive Auxiliary Generators, 2018, ArXiv.
[26] David Duvenaud, et al. Residual Flows for Invertible Generative Modeling, 2019, NeurIPS.
[27] W. Brendel, et al. Foolbox: A Python toolbox to benchmark the robustness of machine learning models, 2017.
[28] Xi Chen, et al. PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications, 2017, ICLR.
[29] Aapo Hyvärinen, et al. Estimation of Non-Normalized Statistical Models by Score Matching, 2005, JMLR.
[30] Zhijian Ou, et al. Generative Modeling by Inclusive Neural Random Fields with Applications in Image Generation and Anomaly Detection, 2018.
[31] Greg Yang, et al. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers, 2019, NeurIPS.
[32] Aleksander Madry, et al. Computer Vision with a Single (Robust) Classifier, 2019, NeurIPS.
[33] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[34] Yee Whye Teh, et al. Bayesian Learning via Stochastic Gradient Langevin Dynamics, 2011, ICML.
[35] Yee Whye Teh, et al. Do Deep Generative Models Know What They Don't Know?, 2018, ICLR.
[36] Yee Whye Teh, et al. Detecting Out-of-Distribution Inputs to Deep Generative Models Using a Test for Typicality, 2019, ArXiv.
[37] Matthias Bethge, et al. Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models, 2017, ArXiv.
[38] Aapo Hyvärinen, et al. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models, 2010, AISTATS.
[39] Xiaojin Zhu, et al. Semi-Supervised Learning, 2010, Encyclopedia of Machine Learning.
[40] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[41] D. Rubin, et al. Maximum Likelihood from Incomplete Data via the EM Algorithm, 1977, Journal of the Royal Statistical Society, Series B.
[42] Kilian Q. Weinberger, et al. On Calibration of Modern Neural Networks, 2017, ICML.
[43] Michael U. Gutmann, et al. Conditional Noise-Contrastive Estimation of Unnormalised Models, 2018, ICML.
[44] Tian Han, et al. On the Anatomy of MCMC-based Maximum Likelihood Learning of Energy-Based Models, 2019, AAAI.
[45] Yang Song, et al. PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples, 2017, ICLR.
[46] Yann LeCun, et al. Regularized estimation of image statistics by Score Matching, 2010, NIPS.
[47] Kevin Gimpel, et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks, 2016, ICLR.
[48] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[49] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[50] Tijmen Tieleman, et al. Training restricted Boltzmann machines using approximations to the likelihood gradient, 2008, ICML.
[51] Yang Lu, et al. A Theory of Generative ConvNet, 2016, ICML.