Random Convolutional Neural Network Based on Distributed Computing with Decentralized Architecture

In recent years, deep learning has made great progress in image classification and detection. Popular deep learning algorithms rely on deep networks and many rounds of back-propagation. In this paper, we propose two approaches to accelerate deep networks. The first widens every layer: following the Extreme Learning Machine, we set a large number of random convolution kernels to extract features in parallel, which yields multiscale features and improves network efficiency. The second freezes part of the layers, which reduces the amount of back-propagation and speeds up training. Combining the two, we propose a random convolutional network architecture for image classification. In our architecture, every combination of random convolutions extracts distinct features, so many experiments are needed to choose the best combination. Centralized computing limits the number of combinations that can be explored; therefore, a decentralized architecture is used to evaluate multiple combinations in parallel.
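As a rough illustration of these two ideas, the following PyTorch sketch combines a wide, randomly initialized convolution layer whose weights are frozen (so no gradients flow through it) with a small trainable classifier head. The layer sizes, kernel count, and classifier shape here are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class RandomConvClassifier(nn.Module):
    """Sketch: frozen random convolution features + trainable head (assumed sizes)."""
    def __init__(self, in_channels=3, num_kernels=256, num_classes=10):
        super().__init__()
        # Wide layer of randomly initialized convolution kernels (ELM-style feature extractor).
        self.random_conv = nn.Conv2d(in_channels, num_kernels, kernel_size=3, padding=1)
        for p in self.random_conv.parameters():
            p.requires_grad = False  # frozen: back-propagation skips this layer
        self.pool = nn.AdaptiveAvgPool2d(4)
        # Only this head is updated during training.
        self.head = nn.Linear(num_kernels * 4 * 4, num_classes)

    def forward(self, x):
        x = torch.relu(self.random_conv(x))
        x = self.pool(x).flatten(1)
        return self.head(x)

model = RandomConvClassifier()
# The optimizer only receives the unfrozen parameters, so training cost is dominated
# by the small head rather than the wide random convolution layer.
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
```

In a decentralized setting, each worker could instantiate such a model with a different random combination of kernels and report its validation accuracy, so that many combinations are evaluated without a central bottleneck.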