On Variational Bounds of Mutual Information
Ben Poole | Sherjil Ozair | Aäron van den Oord | Alexander A. Alemi | George Tucker
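The paper analyzes variational lower bounds on mutual information. As a hedged illustration of one such bound it studies, the sketch below implements the InfoNCE estimator introduced in "Representation Learning with Contrastive Predictive Coding" (van den Oord et al.): given a batch of K paired samples and a critic score matrix, it averages the log-ratio of each positive pair's score against the batch. The function name and the toy score matrices are illustrative assumptions, not taken from this page.

```python
import math

def infonce_lower_bound(scores):
    """InfoNCE lower bound on mutual information.

    scores[i][j] is a critic value f(x_i, y_j) for a batch of K
    paired samples (x_i, y_i); diagonal entries are positive pairs.
    The bound is (1/K) * sum_i [ f(x_i, y_i)
        - log( (1/K) * sum_j exp(f(x_i, y_j)) ) ],
    and is capped at log(K) by construction.
    """
    K = len(scores)
    total = 0.0
    for i in range(K):
        # Log of the mean exponentiated score over the batch row.
        log_denom = math.log(sum(math.exp(s) for s in scores[i]) / K)
        total += scores[i][i] - log_denom
    return total / K
```

With an uninformative critic (all scores equal) the estimate is 0; with a critic that strongly separates positive from negative pairs it approaches its ceiling of log K, which is the bound's well-known limitation for large true MI.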
[1] S. Varadhan,et al. Asymptotic evaluation of certain Markov process expectations for large time , 1975 .
[2] Terrence J. Sejnowski,et al. An Information-Maximization Approach to Blind Separation and Blind Deconvolution , 1995, Neural Computation.
[3] Naftali Tishby,et al. The information bottleneck method , 2000, ArXiv.
[4] Liam Paninski,et al. Estimation of Entropy and Mutual Information , 2003, Neural Computation.
[5] David Barber,et al. The IM algorithm: a variational approach to Information Maximization , 2003, NIPS.
[6] William Bialek,et al. Entropy and information in neural spike trains: progress on the sampling problem , 2003, Physical Review E.
[7] A. Kraskov,et al. Estimating mutual information , 2003, Physical Review E.
[8] Andreas Krause,et al. Discriminative Clustering by Regularized Information Maximization , 2010, NIPS.
[9] Martin J. Wainwright,et al. Estimating Divergence Functionals and the Likelihood Ratio by Convex Risk Minimization , 2008, IEEE Transactions on Information Theory.
[10] Michael Mitzenmacher,et al. Detecting Novel Associations in Large Data Sets , 2011, Science.
[11] Yee Whye Teh,et al. A fast and simple algorithm for training neural probabilistic language models , 2012, ICML.
[12] Daan Wierstra,et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models , 2014, ICML.
[13] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[14] Thomas Brox,et al. Discriminative Unsupervised Feature Learning with Convolutional Neural Networks , 2014, NIPS.
[15] Naftali Tishby,et al. Deep learning and the information bottleneck principle , 2015, 2015 IEEE Information Theory Workshop (ITW).
[16] Aram Galstyan,et al. Efficient Estimation of Mutual Information for Strongly Dependent Variables , 2014, AISTATS.
[17] Michael J. Berry,et al. Predictive information in a sensory population , 2013, Proceedings of the National Academy of Sciences.
[18] Alexei A. Efros,et al. Unsupervised Visual Representation Learning by Context Prediction , 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[19] Alex Graves,et al. Conditional Image Generation with PixelCNN Decoders , 2016, NIPS.
[20] Paolo Favaro,et al. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles , 2016, ECCV.
[21] Charles Blundell,et al. Early Visual Concept Learning with Unsupervised Deep Learning , 2016, ArXiv.
[22] David M. Blei,et al. Variational Inference: A Review for Statisticians , 2016, ArXiv.
[23] Anthony N. Pettitt,et al. A Review of Modern Computational Algorithms for Bayesian Optimal Design , 2016 .
[24] Jascha Sohl-Dickstein,et al. Improved generator objectives for GANs , 2016, ArXiv.
[25] Sebastian Nowozin,et al. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization , 2016, NIPS.
[26] Sebastian Nowozin,et al. Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks , 2017, ICML.
[27] Alexander A. Alemi,et al. Deep Variational Information Bottleneck , 2017, ICLR.
[28] Masashi Sugiyama,et al. Learning Discrete Representations via Information Maximizing Self-Augmented Training , 2017, ICML.
[29] Andriy Mnih,et al. Disentangling by Factorising , 2018, ICML.
[30] David D. Cox,et al. On the information bottleneck theory of deep learning , 2018, ICLR.
[31] Abhishek Kumar,et al. Variational Inference of Disentangled Latent Concepts from Unlabeled Observations , 2017, ICLR.
[32] Guillaume Desjardins,et al. Understanding disentangling in β-VAE , 2018, ArXiv.
[33] David Pfau,et al. Towards a Definition of Disentangled Representations , 2018, ArXiv.
[34] Max Welling,et al. VAE with a VampPrior , 2017, AISTATS.
[35] Nicolas Macris,et al. Entropy and mutual information in models of deep neural networks , 2018, NeurIPS.
[36] Roger B. Grosse,et al. Isolating Sources of Disentanglement in Variational Autoencoders , 2018, NeurIPS.
[37] Alexander A. Alemi,et al. Fixing a Broken ELBO , 2017, ICML.
[38] Zhuang Ma,et al. Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency , 2018, EMNLP.
[39] Rob Brekelmans,et al. Invariant Representations without Adversarial Training , 2018, NeurIPS.
[40] Oriol Vinyals,et al. Representation Learning with Contrastive Predictive Coding , 2018, ArXiv.
[41] Yoshua Bengio,et al. Mutual Information Neural Estimation , 2018, ICML.
[42] Noah D. Goodman,et al. Variational Optimal Experiment Design: Efficient Automation of Adaptive Experiments , 2018 .
[43] David Pfau,et al. Minimally Redundant Laplacian Eigenmaps , 2018, ICLR.
[44] Hongseok Yang,et al. On Nesting Monte Carlo Estimators , 2017, ICML.
[45] Sergey Levine,et al. Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow , 2018, ICLR.
[46] Bernhard Schölkopf,et al. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations , 2018, ICML.
[47] David H. Wolpert,et al. Nonlinear Information Bottleneck , 2017, Entropy.
[48] Yee Whye Teh,et al. Disentangling Disentanglement in Variational Autoencoders , 2018, ICML.
[49] Yoshua Bengio,et al. Learning deep representations by mutual information estimation and maximization , 2018, ICLR.
[50] Karl Stratos,et al. Formal Limitations on the Measurement of Mutual Information , 2018, AISTATS.