Implicit Generation and Modeling with Energy Based Models
[1] D. Dowson,et al. The Fréchet distance between multivariate normal distributions , 1982 .
[2] Geoffrey E. Hinton,et al. A Learning Algorithm for Boltzmann Machines , 1985, Cogn. Sci..
[3] Radford M. Neal. Annealed importance sampling , 1998, Stat. Comput..
[4] Geoffrey E. Hinton. Training Products of Experts by Minimizing Contrastive Divergence , 2002, Neural Computation.
[5] Yee Whye Teh,et al. Energy-Based Models for Sparse Overcomplete Representations , 2003, J. Mach. Learn. Res..
[6] Geoffrey E. Hinton,et al. Learning nonlinear constraints with contrastive backpropagation , 2005, Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005..
[7] Fu Jie Huang,et al. A Tutorial on Energy-Based Learning , 2006 .
[8] Yee Whye Teh,et al. Unsupervised Discovery of Nonlinear Structure Using Contrastive Backpropagation , 2006, Cogn. Sci..
[9] Tijmen Tieleman,et al. Training restricted Boltzmann machines using approximations to the likelihood gradient , 2008, ICML.
[10] Geoffrey E. Hinton,et al. Deep Boltzmann Machines , 2009, AISTATS.
[11] Radford M. Neal. MCMC Using Hamiltonian Dynamics , 2011, arXiv:1206.1901.
[12] Yee Whye Teh,et al. Bayesian Learning via Stochastic Gradient Langevin Dynamics , 2011, ICML.
[13] Alex Graves,et al. Playing Atari with Deep Reinforcement Learning , 2013, ArXiv.
[14] Hugo Larochelle,et al. RNADE: The real-valued neural autoregressive density-estimator , 2013, NIPS.
[15] Iasonas Kokkinos,et al. Describing Textures in the Wild , 2013, 2014 IEEE Conference on Computer Vision and Pattern Recognition.
[16] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[17] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[18] Ruslan Salakhutdinov,et al. Accurate and conservative estimates of MRF log-likelihood using reverse annealing , 2014, AISTATS.
[19] Yoshua Bengio,et al. NICE: Non-linear Independent Components Estimation , 2014, ICLR.
[20] Raia Hadsell,et al. Overcoming catastrophic forgetting in neural networks , 2016, Proc. Natl. Acad. Sci..
[21] Yann LeCun,et al. Energy-based Generative Adversarial Network , 2016, ICLR.
[22] Yang Lu,et al. A Theory of Generative ConvNet , 2016, ICML.
[23] Koray Kavukcuoglu,et al. Pixel Recurrent Neural Networks , 2016, ICML.
[24] Wojciech Zaremba,et al. Improved Techniques for Training GANs , 2016, NIPS.
[25] Soumith Chintala,et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks , 2015, ICLR.
[26] Sergey Levine,et al. A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models , 2016, ArXiv.
[27] Yoshua Bengio,et al. Deep Directed Generative Models with Energy-Based Probability Estimation , 2016, ArXiv.
[28] Sepp Hochreiter,et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium , 2017, NIPS.
[29] Yang Liu,et al. Stein Variational Policy Gradient , 2017, UAI.
[30] Max Welling,et al. Improved Variational Inference with Inverse Autoregressive Flow , 2016, NIPS.
[31] Kevin Gimpel,et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks , 2016, ICLR.
[32] Ruslan Salakhutdinov,et al. On the Quantitative Analysis of Decoder-Based Generative Models , 2016, ICLR.
[33] Minh N. Do,et al. Semantic Image Inpainting with Deep Generative Models , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[34] Sergey Levine,et al. Reinforcement Learning with Deep Energy-Based Policies , 2017, ICML.
[35] Jonathon Shlens,et al. A Learned Representation For Artistic Style , 2016, ICLR.
[36] Christopher Burgess,et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework , 2016, ICLR.
[37] Surya Ganguli,et al. Continual Learning Through Synaptic Intelligence , 2017, ICML.
[38] Jonathon Shlens,et al. Conditional Image Synthesis with Auxiliary Classifier GANs , 2016, ICML.
[39] Aaron C. Courville,et al. Improved Training of Wasserstein GANs , 2017, NIPS.
[40] Prafulla Dhariwal,et al. Glow: Generative Flow with Invertible 1x1 Convolutions , 2018, NeurIPS.
[41] Yarin Gal,et al. Towards Robust Evaluations of Continual Learning , 2018, ArXiv.
[42] Xiaohua Zhai,et al. The GAN Landscape: Losses, Architectures, Regularization, and Normalization , 2018, ArXiv.
[43] Zhengqi Li,et al. Learning Intrinsic Image Decomposition from Watching the World , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[44] Rémi Munos,et al. Autoregressive Quantile Networks for Generative Modeling , 2018, ICML.
[45] Aleksander Madry,et al. Towards Deep Learning Models Resistant to Adversarial Attacks , 2017, ICLR.
[46] Yuichi Yoshida,et al. Spectral Normalization for Generative Adversarial Networks , 2018, ICLR.
[47] Yee Whye Teh,et al. Progress & Compress: A scalable framework for continual learning , 2018, ICML.
[48] Yen-Cheng Liu,et al. Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines , 2018, ArXiv.
[49] Yoshua Bengio,et al. Maximum Entropy Generators for Energy-Based Models , 2019, ArXiv.
[50] Yee Whye Teh,et al. Do Deep Generative Models Know What They Don't Know? , 2018, ICLR.
[51] Thomas G. Dietterich,et al. Deep Anomaly Detection with Outlier Exposure , 2018, ICLR.
[52] Graham Neubig,et al. Lagging Inference Networks and Posterior Collapse in Variational Autoencoders , 2019, ICLR.
[53] Jakub W. Pachocki,et al. Learning dexterous in-hand manipulation , 2018, Int. J. Robotics Res..
[54] Richard E. Turner. CD notes , 2022 .