Uniform Interpolation Constrained Geodesic Learning on Data Manifold

In this paper, we propose a method for learning a minimizing geodesic within a data manifold. Along the learned geodesic, our method can generate high-quality interpolations between two given data samples. Specifically, we use an autoencoder network to map data samples into a latent space and perform interpolation via an interpolation network. We add prior geometric information to regularize the autoencoder toward convex latent representations, so that for any given interpolation approach the generated interpolations remain within the distribution of the data manifold. Before a geodesic can be learned, a proper Riemannian metric must be defined. We therefore induce a Riemannian metric from the canonical metric of the Euclidean space in which the data manifold is isometrically immersed. Based on this metric, we introduce a constant-speed loss and a minimizing-geodesic loss that regularize the interpolation network to produce uniform interpolations along the learned geodesic on the manifold. We provide a theoretical analysis of our model and use image translation as an example to demonstrate the effectiveness of our method.
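For concreteness, the sketch below writes out the standard pullback construction and the typical form such constraints take, assuming a decoder $g$ that immerses the latent space into Euclidean data space. The symbols $g$, $\gamma$, $M$, and the loss names are illustrative notation, not definitions taken from the paper.

```latex
% Minimal sketch (illustrative notation, not the paper's own definitions).
% Let g : Z -> R^D be the decoder immersing the latent space into the
% Euclidean data space; the induced (pullback) metric is
\[
  M(z) \;=\; J_g(z)^{\top} J_g(z),
  \qquad
  J_g(z) \;=\; \frac{\partial g}{\partial z}.
\]
% A latent curve \gamma(t), t \in [0,1], produced by the interpolation
% network then has speed
\[
  s(t) \;=\; \sqrt{\dot{\gamma}(t)^{\top}\, M\bigl(\gamma(t)\bigr)\, \dot{\gamma}(t)},
\]
% and plausible forms of the two losses are
\[
  \mathcal{L}_{\mathrm{speed}} \;=\; \int_0^1 \bigl(s(t) - \bar{s}\bigr)^2 \, dt,
  \qquad
  \mathcal{L}_{\mathrm{geo}} \;=\; \int_0^1 s(t)\, dt,
\]
% where \bar{s} denotes the mean speed along the curve.
```

Under this reading, minimizing the curve length $\mathcal{L}_{\mathrm{geo}}$ drives $\gamma$ toward a geodesic, while penalizing the speed variance via $\mathcal{L}_{\mathrm{speed}}$ makes points sampled at evenly spaced $t$ land uniformly along that geodesic, which is what uniform interpolation requires.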
