Can VAEs capture topological properties?

To what extent can Variational Autoencoders (VAEs) identify semantically meaningful latent variables? Can they at least capture the correct topology when the ground-truth latent variables are known? To investigate these questions, we introduce the Diffusion VAE, which allows arbitrary closed manifolds as latent space. A Diffusion VAE uses the transition kernels of Brownian motion on the manifold; in particular, it exploits properties of Brownian motion to implement the reparametrization trick and fast approximations to the KL divergence. We show that the Diffusion Variational Autoencoder is indeed capable of capturing topological properties of the data.
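The reparametrization idea can be made concrete with a short sketch. The code below draws an approximate sample from the Brownian-motion transition kernel on the unit sphere by taking small ambient Gaussian steps and projecting back onto the manifold after each one; for small steps this projected random walk approximates Brownian motion on the sphere, and every operation is differentiable in the mean and diffusion time, which is what the reparametrization trick requires. This is a minimal sketch assuming a PyTorch setting and a sphere latent space; the function name sample_brownian_sphere, the step count, and the projection-based walk are illustrative assumptions, not the paper's exact implementation.

import torch

def sample_brownian_sphere(mu, t, num_steps=20):
    # Approximate sample from the Brownian-motion transition kernel on
    # the unit sphere, started at mu with diffusion time t (both tensors).
    # Each iteration takes a small ambient Gaussian step of variance
    # t/num_steps and projects back onto the sphere; all operations are
    # differentiable, so gradients flow to mu and t (reparametrization).
    z = mu / mu.norm(dim=-1, keepdim=True)    # start on the sphere
    step = (t / num_steps).sqrt()             # std. dev. of each step
    for _ in range(num_steps):
        eps = torch.randn_like(z)             # ambient Gaussian noise
        z = z + step.unsqueeze(-1) * eps      # small Euclidean step
        z = z / z.norm(dim=-1, keepdim=True)  # project back onto S^2
    return z

# Usage: a batch of 4 encoder means in R^3, each with diffusion time 0.1.
mu = torch.randn(4, 3, requires_grad=True)
t = torch.full((4,), 0.1, requires_grad=True)
z = sample_brownian_sphere(mu, t)
z.sum().backward()   # gradients reach mu and t through the sample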
