On the Compressive Power of Boolean Threshold Autoencoders

An autoencoder is a layered neural network whose structure can be viewed as consisting of an encoder, which compresses an input vector of dimension $D$ into a vector of low dimension $d$, and a decoder, which transforms the low-dimensional vector back to the original input vector (or one that is very similar). In this paper we explore the compressive power of autoencoders that are Boolean threshold networks by studying the numbers of nodes and layers required to ensure that each vector in a given set of distinct input binary vectors is transformed back to its original. We show that for any set of $n$ distinct vectors there exists a seven-layer autoencoder with the smallest possible middle layer (i.e., its size is logarithmic in $n$), but that there is a set of $n$ vectors for which no three-layer autoencoder with a middle layer of the same size exists. In addition, we present a kind of trade-off: if a considerably larger middle layer is permissible, then a five-layer autoencoder does exist. We also study encoding by itself. The results we obtain suggest that it is the decoding that constitutes the bottleneck of autoencoding. For example, there always exists a three-layer Boolean threshold encoder that compresses $n$ vectors into a dimension that is twice the logarithm of $n$.
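For intuition on why a logarithmic-size middle layer is the smallest possible, note that $d$ binary middle-layer nodes can take at most $2^d$ distinct values, so distinguishing $n$ input vectors forces $d \geq \lceil \log_2 n \rceil$. The sketch below is a hypothetical Python illustration of the model, not the paper's construction: it shows a Boolean threshold gate, the basic computational unit of the networks studied here, together with this information-theoretic floor on the middle-layer width.

```python
import math

def threshold_gate(x, w, t):
    """Boolean threshold gate: outputs 1 iff the weighted sum of the
    binary inputs x (with real weights w) reaches the threshold t."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= t else 0

# Example: a 2-input AND is a threshold gate with weights (1, 1), threshold 2.
assert threshold_gate((1, 1), (1, 1), 2) == 1
assert threshold_gate((1, 0), (1, 1), 2) == 0

def min_middle_layer(n):
    """Information-theoretic floor on the middle-layer width: d binary
    nodes can distinguish at most 2**d vectors, so d >= ceil(log2(n))."""
    return math.ceil(math.log2(n))

print(min_middle_layer(1000))  # 10: a thousand distinct vectors need >= 10 middle nodes
```

Under this reading, the paper's seven-layer result matches the floor computed by min_middle_layer, while the three-layer encoder result uses roughly twice that width.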
