When doing representation learning on data that lives on a known non-trivial manifold embedded in a high-dimensional space, it is natural to ask that the encoder be a homeomorphism when restricted to the manifold, i.e., bijective and continuous with a continuous inverse. Using topological arguments, we show that when the manifold is non-trivial, the encoder must be globally discontinuous, and we propose a universal, albeit impractical, construction. In addition, we derive necessary constraints that must be satisfied when designing manifold-specific practical encoders. These are used to analyse candidates for a homeomorphic encoder for the manifold of 3D rotations $SO(3)$.
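To make the obstruction concrete, the following is an illustrative sketch for the simplest non-trivial case, using the standard no-retraction argument from algebraic topology; it assumes a latent space that is itself a copy of the manifold and is not a reproduction of the paper's own proofs. Take the circle $S^1 \subset \mathbb{R}^2$ and suppose an encoder $E : \mathbb{R}^2 \to S^1$ were continuous on all of $\mathbb{R}^2$ and restricted to a homeomorphism $h = E|_{S^1} : S^1 \to S^1$. Writing $D^2$ for the closed unit disk, the map
\[
  r \;=\; h^{-1} \circ E\big|_{D^2} \;:\; D^2 \longrightarrow S^1,
  \qquad r(x) = x \ \text{ for all } x \in S^1,
\]
would be a retraction of the disk onto its boundary circle. Applying the fundamental group functor to the inclusion $\iota : S^1 \to D^2$ followed by $r$ gives
\[
  \mathbb{Z} \;\cong\; \pi_1(S^1) \xrightarrow{\ \iota_*\ } \pi_1(D^2) \;\cong\; 0 \xrightarrow{\ r_*\ } \pi_1(S^1) \;\cong\; \mathbb{Z},
  \qquad r_* \circ \iota_* = \mathrm{id},
\]
so the identity on $\mathbb{Z}$ would factor through the trivial group, which is impossible. Hence no such globally continuous $E$ exists. The same style of argument, with the contractible ambient space of $3 \times 3$ matrices in place of $D^2$ and $\pi_1(SO(3)) \cong \mathbb{Z}/2\mathbb{Z}$ in place of $\mathbb{Z}$, obstructs a globally continuous homeomorphic encoder for $SO(3)$.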