Variational autoencoders (VAEs) learn probabilistic latent variable models by optimizing a bound on the marginal likelihood of the observed data. Beyond providing a good density model, a VAE assigns each data instance a latent code, and in many applications this code serves as a useful high-level summary of the observation. However, a VAE may fail to learn a useful representation when the decoder family is very expressive: maximum likelihood does not explicitly encourage useful representations, and the latent variable is used only insofar as it helps model the marginal distribution. This makes representation learning with VAEs unreliable. To address this issue, we propose a method for explicitly controlling the amount of information stored in the latent code. Our method can learn codes ranging from independent of the input to nearly deterministic, while still benefiting from an expressive decoder. Thus, we decouple the choice of decoder capacity and latent code dimensionality from the amount of information stored in the code.
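The "amount of information stored in the latent code" can be made concrete via the KL term of the standard VAE bound (ELBO): for a diagonal-Gaussian encoder q(z|x) = N(mu, diag(sigma^2)) and a standard-normal prior, KL(q(z|x) || N(0, I)) has a closed form, and a collapsed posterior (KL ≈ 0 for every x) means the code carries no information about the input. A minimal numpy sketch of this quantity, purely illustrative (the function name and values are ours, not the paper's method):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Collapsed posterior: q(z|x) = N(0, I) regardless of x -> KL = 0,
# i.e. the latent code stores no information about the observation.
kl_collapsed = gaussian_kl(np.zeros(8), np.zeros(8))

# Informative posterior: mean and variance depend on x -> KL > 0,
# an upper bound on the information the code carries about x.
kl_active = gaussian_kl(np.array([1.0, -0.5]),
                        np.log(np.array([0.25, 0.5])))
```

Posterior collapse with powerful decoders corresponds to the first case: the decoder models the data alone, the KL term is driven to zero, and the code is useless as a representation.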