Deep convolutional neural networks (ConvNets) have rapidly grown in popularity due to their powerful capability to represent and model high-level abstractions of complex data. However, ConvNets require an abundance of data to adequately train their network parameters. To tackle this problem, we introduce the concept of stochastic receptive fields, where the receptive fields are stochastic realizations of a random field that obeys a learned distribution. We study the efficacy of incorporating layers of stochastic receptive fields into a ConvNet to boost performance without the need for additional training data. Preliminary results show an improvement in accuracy (a 2% drop in test error) when adding a layer of stochastic receptive fields to a ConvNet, compared to adding a layer of fully trained receptive fields, when training on a small set consisting of 20% of the STL-10 dataset.
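The abstract does not specify the parametric form of the learned distribution over receptive fields. The sketch below is a minimal, hypothetical PyTorch illustration of the idea, assuming a per-weight Gaussian sampled with the reparameterization trick; the class name StochasticReceptiveField2d and the (mu, log_sigma) parameterization are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticReceptiveField2d(nn.Module):
    """Convolutional layer whose filters are stochastic realizations of a
    random field obeying a learned distribution. Hypothetical sketch: the
    distribution is assumed Gaussian with learned mean and log-std, sampled
    via the reparameterization trick so both parameters receive gradients."""

    def __init__(self, in_channels, out_channels, kernel_size, padding=0):
        super().__init__()
        shape = (out_channels, in_channels, kernel_size, kernel_size)
        # Learned distribution parameters (assumed Gaussian per weight).
        self.mu = nn.Parameter(torch.randn(shape) * 0.01)
        self.log_sigma = nn.Parameter(torch.full(shape, -3.0))
        self.padding = padding

    def forward(self, x):
        if self.training:
            # Draw a fresh stochastic realization of the receptive fields.
            eps = torch.randn_like(self.mu)
            weight = self.mu + torch.exp(self.log_sigma) * eps
        else:
            # Use the distribution's mean at test time for determinism.
            weight = self.mu
        return F.conv2d(x, weight, padding=self.padding)

Under these assumptions, a layer such as StochasticReceptiveField2d(3, 64, kernel_size=5, padding=2) could replace a standard convolution on 96x96 STL-10 images; sampling a new realization on each forward pass acts as a regularizer, which is one plausible reading of why such layers might help when training data is scarce.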