Complex-Valued Densely Connected Convolutional Networks

In recent years, deep learning has made significant progress in computer vision, but most work focuses on real-valued algorithms. Complex numbers offer richer representational capacity and can reduce the number of trainable parameters. This paper proposes complex-valued DenseNets, a complex-valued densely connected convolutional network that generalizes the real-valued DenseNet architecture to the complex domain and constructs its basic building blocks: complex dense blocks and complex transition layers. Experiments on the CIFAR-10 and CIFAR-100 datasets show that the proposed network achieves a lower error rate with fewer parameters than real-valued DenseNets and complex-valued ResNets.
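The core operation underlying a complex dense block is complex-valued convolution: a complex kernel W = A + iB applied to a complex feature map h = x + iy yields W∗h = (A∗x − B∗y) + i(A∗y + B∗x), i.e. four real convolutions. The sketch below illustrates this rule in plain Python on 1-D signals; all function names are illustrative and are not taken from the authors' implementation.

```python
# Illustrative sketch (not the paper's code): complex convolution built
# from four real convolutions, per (A + iB) * (x + iy) = (Ax - By) + i(Ay + Bx).

def conv1d(x, k):
    """Plain real-valued 'valid' 1-D convolution (implemented as cross-correlation)."""
    n = len(x) - len(k) + 1
    return [sum(x[i + j] * k[j] for j in range(len(k))) for i in range(n)]

def complex_conv1d(x_re, x_im, k_re, k_im):
    """Complex-valued convolution: real/imaginary parts handled separately."""
    real = [a - b for a, b in zip(conv1d(x_re, k_re), conv1d(x_im, k_im))]
    imag = [a + b for a, b in zip(conv1d(x_re, k_im), conv1d(x_im, k_re))]
    return real, imag
```

In a complex dense block, each layer's complex feature maps would then be concatenated channel-wise with those of all preceding layers, mirroring the dense connectivity of real-valued DenseNets.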
