Erik Nijkamp | Ying Nian Wu | Song-Chun Zhu | Ruiqi Gao | Bo Pang | Srinivas Vasudevan | Pavel Sountsov
[1] P. Mazur. On the theory of Brownian motion, 1959.
[2] R. Mazo. On the theory of Brownian motion, 1973.
[3] Geoffrey E. Hinton, et al. A Learning Algorithm for Boltzmann Machines, 1985, Cogn. Sci.
[4] D. Rubin, et al. Inference from Iterative Simulation Using Multiple Sequences, 1992.
[5] Geoffrey E. Hinton, et al. The "wake-sleep" algorithm for unsupervised neural networks, 1995, Science.
[6] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[7] Andrew Gelman, et al. General methods for monitoring convergence of iterative simulations, 1998.
[8] Ronald Rosenfeld, et al. Whole-sentence exponential language models: a vehicle for linguistic-statistical integration, 2001, Comput. Speech Lang.
[9] Fu Jie Huang, et al. A Tutorial on Energy-Based Learning, 2006.
[10] Zhuowen Tu, et al. Learning Generative Models via Discriminative Approaches, 2007, CVPR.
[11] Christophe Andrieu, et al. A tutorial on adaptive MCMC, 2008, Stat. Comput.
[12] Tijmen Tieleman, et al. Training restricted Boltzmann machines using approximations to the likelihood gradient, 2008, ICML.
[13] J. M. Sanz-Serna, et al. Optimal tuning of the hybrid Monte Carlo algorithm, 2010, arXiv:1001.4460.
[14] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[15] Aapo Hyvärinen, et al. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models, 2010, AISTATS.
[16] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[17] Radford M. Neal. MCMC Using Hamiltonian Dynamics, 2011, arXiv:1206.1901.
[18] Jiquan Ngiam, et al. Learning Deep Energy Models, 2011, ICML.
[19] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[20] Yoshua Bengio, et al. Better Mixing via Deep Representations, 2012, ICML.
[21] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[22] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[23] Max Welling, et al. Efficient Gradient-Based Inference through Transformations between Bayes Nets and Neural Nets, 2014, ICML.
[24] Shakir Mohamed, et al. Variational Inference with Normalizing Flows, 2015, ICML.
[25] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2014, ICCV 2015.
[26] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[27] Yoshua Bengio, et al. NICE: Non-linear Independent Components Estimation, 2014, ICLR.
[28] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, CVPR 2016.
[29] Yann LeCun, et al. Energy-based Generative Adversarial Network, 2016, ICLR.
[30] Yang Lu, et al. A Theory of Generative ConvNet, 2016, ICML.
[31] Sergey Levine, et al. A Connection between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models, 2016, arXiv.
[32] Yoshua Bengio, et al. Deep Directed Generative Models with Energy-Based Probability Estimation, 2016, arXiv.
[33] Samy Bengio, et al. Density estimation using Real NVP, 2016, ICLR.
[34] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[35] Quoc V. Le, et al. Swish: a Self-Gated Activation Function, 2017, arXiv:1710.05941.
[36] Zhuowen Tu, et al. Introspective Classification with Convolutional Nets, 2017, NIPS.
[37] Max Welling, et al. Improved Variational Inference with Inverse Autoregressive Flow, 2016, NIPS.
[38] Oren Mangoubi, et al. Rapid Mixing of Hamiltonian Monte Carlo on Strongly Log-Concave Distributions, 2017, arXiv:1708.07114.
[39] Tian Han, et al. Alternating Back-Propagation for Generator Network, 2016, AAAI.
[40] Zhuowen Tu, et al. Introspective Neural Networks for Generative Modeling, 2017, ICCV.
[41] Prafulla Dhariwal, et al. Glow: Generative Flow with Invertible 1x1 Convolutions, 2018, NeurIPS.
[42] Bin Wang, et al. Learning Neural Trans-Dimensional Random Field Language Models with Noise-Contrastive Estimation, 2017, ICASSP 2018.
[43] Yang Lu, et al. Learning Generative ConvNets via Multi-grid Modeling and Sampling, 2017, CVPR 2018.
[44] Quoc V. Le, et al. Searching for Activation Functions, 2018, arXiv.
[45] Jeff Donahue, et al. Large Scale GAN Training for High Fidelity Natural Image Synthesis, 2018, ICLR.
[46] Yoshua Bengio, et al. Maximum Entropy Generators for Energy-Based Models, 2019, arXiv.
[47] Joshua V. Dillon, et al. NeuTra-lizing Bad Geometry in Hamiltonian Monte Carlo Using Neural Transport, 2019, arXiv:1903.03704.
[48] Igor Mordatch, et al. Implicit Generation and Generalization with Energy Based Models, 2018.
[49] David Duvenaud, et al. Invertible Residual Networks, 2018, ICML.
[50] David Duvenaud, et al. FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models, 2018, ICLR.
[51] VideoFlow: A Flow-Based Generative Model for Video, 2019, arXiv.
[52] Kumar Krishna Agrawal, et al. Discrete Flows: Invertible Generative Models of Discrete Data, 2019, DGS@ICLR.
[53] Erik Nijkamp, et al. Building a Telescope to Look Into High-Dimensional Image Spaces, 2018, Quarterly of Applied Mathematics.
[54] Song-Chun Zhu, et al. On Learning Non-Convergent Short-Run MCMC Toward Energy-Based Model, 2019, arXiv.
[55] S. Levine, et al. VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation, 2019, ICLR.
[56] Andrew M. Dai, et al. Flow Contrastive Estimation of Energy-Based Models, 2019, CVPR 2020.
[57] Aapo Hyvärinen, et al. Variational Autoencoders and Nonlinear ICA: A Unifying Framework, 2019, AISTATS.
[58] Dootika Vats, et al. Revisiting the Gelman–Rubin Diagnostic, 2018, Statistical Science.