Recent experimental work has shown that the lower layers of a trained convolutional neural network (CNN) can be used to model natural textures. More interestingly, it has also been shown experimentally that a single layer with random filters can model textures as well, although with less variability. In this paper we ask why one-layer CNNs with random filters are so effective at generating textures. We show theoretically that one-layer convolutional architectures without a non-linearity, paired with an energy function used in prior work, preserve and modulate frequency coefficients in such a way that random weights and pretrained weights generate the same type of images. Based on this analysis, we ask whether similar properties hold when a one-layer convolution is combined with a non-linearity. We show that with a ReLU non-linearity there are situations in which only one input attains the minimum possible energy, whereas without a non-linearity there are always infinitely many inputs that attain it. Thus, in certain situations, adding a ReLU non-linearity generates less variable images.
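The core observation about the linear case can be sketched numerically: by the convolution theorem, a convolution layer without a non-linearity acts multiplicatively on the input's frequency coefficients, so the layer can only rescale each frequency by the magnitude of the filter's spectrum, regardless of whether the filter is random or pretrained. The snippet below is an illustrative 1-D sketch of this fact (it is not code from the paper, and the signal/filter names are hypothetical), assuming circular convolution so the DFT diagonalizes the operation exactly.

```python
import numpy as np

# Illustrative sketch (not from the paper): a linear convolution layer
# acts multiplicatively on frequency coefficients. For circular
# convolution, DFT(w * x) = DFT(w) . DFT(x), so the layer rescales each
# frequency of the input independently; which frequencies are preserved
# or suppressed is determined by |DFT(w)| alone.

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n)   # hypothetical input signal (1-D stand-in for an image)
w = rng.standard_normal(n)   # random filter (plays the role of random CNN weights)

# Convolution computed in the frequency domain ...
y = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))

# ... matches the spatial-domain circular convolution
# (w * x)[k] = sum_j w[j] x[(k - j) mod n].
y_spatial = np.array([np.sum(w * np.roll(x[::-1], k + 1)) for k in range(n)])

assert np.allclose(y, y_spatial)
```

Because the output spectrum is just an elementwise rescaling of the input spectrum, any input whose coefficients agree on the frequencies the filter does not kill produces the same energy value, which is consistent with the abstract's claim that infinitely many inputs attain the minimum energy in the linear case.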