Variational Capsule Encoder

We propose a novel capsule-network-based variational encoder architecture, called Bayesian capsules (B-Caps), in which capsule outputs modulate the mean and standard deviation of the sampling distribution in the latent space. We hypothesize that this approach learns a better representation of features in the latent space than traditional approaches. We tested this hypothesis by using the learned latent variables for an image reconstruction task, where our proposed model successfully separated the different classes of the MNIST and Fashion-MNIST datasets in the latent space. Our experimental results show improved reconstruction and classification performance on both datasets, lending credence to our hypothesis. We also show that, as the latent space dimension increases, the proposed B-Caps learns a better representation than a traditional variational autoencoder (VAE). Hence, our results indicate the strength of capsule networks in representation learning, which had not previously been examined in the VAE setting.
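The abstract does not include an implementation, but the core mechanism it describes, a capsule layer whose squashed vector outputs feed the mean and log-variance of the latent Gaussian, followed by the standard reparameterization trick, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' actual B-Caps architecture: all layer sizes, weight shapes, and names (`squash`, `encode`, `W_caps`, etc.) are hypothetical, and real capsule networks add routing between capsule layers, which is omitted here.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Capsule squashing nonlinearity: shrinks short vectors toward zero
    # and long vectors toward unit length, preserving direction.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

rng = np.random.default_rng(0)

# Hypothetical sizes: 784-d input (a flattened 28x28 image),
# 8 capsules of dimension 16, and a 2-d latent space.
in_dim, n_caps, cap_dim, z_dim = 784, 8, 16, 2

# Randomly initialized weights stand in for learned parameters.
W_caps = rng.normal(0, 0.05, (in_dim, n_caps * cap_dim))
W_mu = rng.normal(0, 0.05, (n_caps * cap_dim, z_dim))
W_logvar = rng.normal(0, 0.05, (n_caps * cap_dim, z_dim))

def encode(x):
    # Capsule layer: project the input, reshape into vectors, squash.
    caps = squash((x @ W_caps).reshape(len(x), n_caps, cap_dim))
    h = caps.reshape(len(x), -1)
    # Capsule activations modulate the Gaussian's mean and log-variance.
    mu, logvar = h @ W_mu, h @ W_logvar
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
    # keeping the sampling step differentiable with respect to mu, logvar.
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps
    return z, mu, logvar

x = rng.standard_normal((4, in_dim))  # batch of 4 dummy images
z, mu, logvar = encode(x)
print(z.shape)  # (4, 2)
```

A decoder mapping `z` back to image space and an ELBO loss (reconstruction term plus KL divergence to the prior) would complete the variational autoencoder; only the encoder side, where B-Caps differs from a standard VAE, is sketched here.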
