Deep Generative Model and Analysis of Cardiac Transmembrane Potential

It has recently been shown that inverse electrophysiological imaging can be improved by using a deep generative model learned in an unsupervised manner, allowing the cardiac transmembrane potential (TMP) and its underlying generative model to be inferred simultaneously from the ECG. The prior and conditional distributions learned in this way are, however, directly affected by the architecture of the neural network used in unsupervised learning. In this paper, we investigate the effect of network architecture on representation learning and generalization to new test cases. By comparing the reconstructions produced by three types of sequence autoencoders, we show that different architectures may focus on different aspects of the TMP and may perform differently depending on the metric used to measure reconstruction accuracy. We also analyze the latent spaces of the different architectures and discuss important questions raised by these observations.
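To make the setting concrete, the sketch below shows a minimal sequence autoencoder of the kind compared in this work: an encoder that compresses a TMP sequence into a latent code and a decoder that reconstructs the sequence from that code. The paper does not specify its architectures, so the choice of an LSTM, the layer sizes, and all names (`SeqAutoencoder`, `hidden_dim`, `latent_dim`, the placeholder node count) are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of an LSTM sequence autoencoder for TMP sequences.
# All architectural choices here are assumptions for illustration only.
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    def __init__(self, input_dim, hidden_dim, latent_dim):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)    # compress final hidden state
        self.from_latent = nn.Linear(latent_dim, hidden_dim)  # expand latent code back
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        # x: (batch, time, nodes) -- TMP signal over the cardiac mesh nodes
        _, (h, _) = self.encoder(x)           # h: (1, batch, hidden_dim)
        z = self.to_latent(h[-1])             # one latent vector per sequence
        # Repeat the latent code at every time step to drive the decoder
        dec_in = self.from_latent(z).unsqueeze(1).repeat(1, x.size(1), 1)
        out, _ = self.decoder(dec_in)
        return self.readout(out), z

# Usage sketch: unsupervised training by sequence reconstruction (MSE loss).
model = SeqAutoencoder(input_dim=100, hidden_dim=256, latent_dim=32)  # 100 = placeholder node count
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 200, 100)                  # placeholder batch of TMP sequences
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)
loss.backward()
opt.step()
```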