3-Level Residual Capsule Network for Complex Datasets

Convolutional Neural Networks (CNNs) have driven substantial improvements in the field of Machine Learning, but they come with their own set of drawbacks. Capsule Networks address several limitations of CNNs and have shown great improvement by explicitly modeling the pose and transformation of entities in an image. Deeper networks are more powerful than shallow networks but are also more difficult to train; Residual Networks ease training and have shown evidence that considerable depth can yield good accuracy. The Residual Capsule Network [6] combines the Residual Network and the Capsule Network. Though it did well on simple datasets such as MNIST, the architecture can be improved to do better on complex datasets like CIFAR-10. This brings us to the idea of the 3-Level Residual Capsule Network, which not only decreases the number of parameters compared to the seven-ensemble model, but also performs better on complex datasets than the Residual Capsule Network.
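The two building blocks combined above can be illustrated with a minimal numpy sketch: a simplified residual block (identity skip connection, He et al. [4]) whose output is passed through the capsule "squash" nonlinearity (Sabour et al. [9]). This is only a toy illustration of the underlying ideas, not the 3-Level Residual Capsule Network architecture itself; the weight shapes and layer choices here are assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Capsule "squash" nonlinearity: shrinks short vectors toward 0 and
    # long vectors toward unit length, so a capsule's length can be
    # interpreted as a probability of entity presence.
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def residual_block(x, w1, w2):
    # Simplified residual block: output = ReLU(F(x) + x), where the
    # identity skip connection eases gradient flow in deep networks.
    h = np.maximum(0.0, x @ w1)          # inner linear layer + ReLU
    return np.maximum(0.0, h @ w2 + x)   # add skip connection, then ReLU

# Toy demo: a residual transform followed by a capsule squash.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))          # 4 toy "capsules" of dimension 8
w1 = rng.standard_normal((8, 8)) * 0.1   # hypothetical weights
w2 = rng.standard_normal((8, 8)) * 0.1
caps = squash(residual_block(x, w1, w2))
print(np.linalg.norm(caps, axis=-1))     # every capsule length is below 1
```

Because `squash` maps a vector of norm n to one of norm n^2 / (1 + n^2), every output capsule has length strictly less than 1, which is what lets capsule lengths act as probabilities.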

[1] Geoffrey E. Hinton, et al. Transforming Auto-Encoders, 2011, ICANN.

[2] Geoffrey E. Hinton, et al. Matrix capsules with EM routing, 2018, ICLR.

[3] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.

[4] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[5] Yang Jin, et al. Capsule Network Performance on Complex Data, 2017, ArXiv.

[6] Mohamed El-Sharkawy, et al. Residual Capsule Network, 2019, 2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON).

[7] Jürgen Schmidhuber, et al. Highway Networks, 2015, ArXiv.

[8] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.

[9] Geoffrey E. Hinton, et al. Dynamic Routing Between Capsules, 2017, NIPS.