Stacked Capsule Autoencoders

Objects are composed of a set of geometrically organized parts. We introduce an unsupervised capsule autoencoder (SCAE), which explicitly uses geometric relationships between parts to reason about objects. Since these relationships do not depend on the viewpoint, our model is robust to viewpoint changes. SCAE consists of two stages. In the first stage, the model predicts presences and poses of part templates directly from the image and tries to reconstruct the image by appropriately arranging the templates. In the second stage, SCAE predicts parameters of a few object capsules, which are then used to reconstruct part poses. Inference in this model is amortized and performed by off-the-shelf neural encoders, unlike in previous capsule networks. We find that object capsule presences are highly informative of the object class, which leads to state-of-the-art results for unsupervised classification on SVHN (55%) and MNIST (98.7%). The code is available online.
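
For concreteness, the sketch below illustrates the two-stage structure described above in PyTorch. It is a minimal, assumption-laden toy version, not the released implementation: the module names (PartCapsuleAE, ObjectCapsuleAE), layer sizes, the additive pose composition, and the squared-error reconstruction losses are simplifications introduced here; the paper's full model uses mixture likelihoods and sparsity regularizers rather than these simple squared errors.

```python
# Illustrative sketch only: names, sizes, and losses are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartCapsuleAE(nn.Module):
    """Stage 1: predict part poses/presences from the image and reconstruct
    the image by affinely warping learned part templates into place."""

    def __init__(self, n_parts=16, template_size=11, image_size=28):
        super().__init__()
        self.n_parts, self.image_size = n_parts, image_size
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(image_size * image_size, 256), nn.ReLU(),
            nn.Linear(256, n_parts * 7),  # 6 affine pose params + 1 presence logit per part
        )
        self.templates = nn.Parameter(
            torch.randn(n_parts, 1, template_size, template_size) * 0.1)

    def forward(self, x):                       # x: (B, 1, H, W)
        b = x.size(0)
        out = self.encoder(x).view(b, self.n_parts, 7)
        pose = out[..., :6].view(b * self.n_parts, 2, 3)   # per-part affine pose
        presence = torch.sigmoid(out[..., 6])              # per-part presence in [0, 1]
        # Warp every template into image coordinates with its predicted pose.
        templates = self.templates.expand(b, -1, -1, -1, -1).reshape(
            b * self.n_parts, 1, *self.templates.shape[-2:])
        grid = F.affine_grid(pose, (b * self.n_parts, 1, self.image_size,
                                    self.image_size), align_corners=False)
        parts = F.grid_sample(templates, grid, align_corners=False)
        parts = parts.view(b, self.n_parts, self.image_size, self.image_size)
        recon = (presence[..., None, None] * parts).sum(1)  # mix parts into one image
        return recon, out[..., :6], presence


class ObjectCapsuleAE(nn.Module):
    """Stage 2: predict a few object capsules from the part poses and
    reconstruct those poses via learned object-part relationships."""

    def __init__(self, n_parts=16, n_objects=10):
        super().__init__()
        self.n_objects = n_objects
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_parts * 6, 256), nn.ReLU(),
            nn.Linear(256, n_objects * 7),  # object pose params + presence logit
        )
        # Learned object-part relationship for every (object, part) pair.
        self.op_relations = nn.Parameter(torch.randn(n_objects, n_parts, 6) * 0.1)

    def forward(self, part_pose):               # part_pose: (B, P, 6)
        b = part_pose.size(0)
        out = self.encoder(part_pose).view(b, self.n_objects, 7)
        obj_presence = torch.sigmoid(out[..., 6])
        # Each object capsule votes for part poses; composition is simplified
        # to addition here (the paper composes affine transformations).
        votes = out[..., None, :6] + self.op_relations          # (B, O, P, 6)
        weights = torch.softmax(obj_presence, dim=-1)[..., None, None]
        recon_pose = (weights * votes).sum(1)                   # (B, P, 6)
        return recon_pose, obj_presence


def scae_loss(x, pcae, ocae):
    """Toy objective: image reconstruction (stage 1) + part-pose reconstruction (stage 2)."""
    recon_img, part_pose, _ = pcae(x)
    recon_pose, _ = ocae(part_pose.detach())
    return (F.mse_loss(recon_img, x.squeeze(1)) +
            F.mse_loss(recon_pose, part_pose.detach()))


# Toy usage on random 28x28 images.
x = torch.rand(8, 1, 28, 28)
pcae, ocae = PartCapsuleAE(), ObjectCapsuleAE()
print(scae_loss(x, pcae, ocae))
```

In a sketch like this, the object-capsule presences produced by stage 2 are the quantities that the abstract reports as being highly informative of object class.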
