Randnet: Deep Learning with Compressed Measurements of Images

Principal component analysis, dictionary learning, and auto-encoders are all unsupervised methods for learning representations from large amounts of training data. In all of these methods, the higher the dimension of the input data, the longer learning takes. We introduce a class of neural networks, termed RandNet, for learning representations from compressed random measurements of data of interest, such as images. RandNet extends the convolutional recurrent sparse auto-encoder architecture to dense networks and, more importantly, to the case in which the input data are compressed random measurements of the original data. Compressing the input data makes it possible to fit a larger number of batches in memory during training. Moreover, in the case of sparse measurements, training is computationally more efficient. We demonstrate that, in unsupervised settings, RandNet performs dictionary learning using compressed data. In supervised settings, we show that RandNet can classify MNIST images with minimal loss in accuracy, despite being trained on random projections of the images that reduce their size by 50%. Overall, our results provide a general, principled framework for training neural networks using compressed data.
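To make the core idea concrete, the following is a minimal sketch (not the authors' implementation): images are compressed with a fixed random measurement matrix to half their original dimension, and a sparse code is recovered against a dictionary by unrolled ISTA iterations operating entirely on the compressed measurements. The Gaussian measurement matrix, dictionary size, iteration count, and function names (`measure` step, `ista_encode`) are illustrative assumptions.

```python
# Illustrative sketch of learning from compressed random measurements.
# Assumptions (not from the paper): Gaussian measurement matrix, 256 atoms,
# 30 unrolled ISTA iterations, sparsity penalty lam = 0.1.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 28 * 28          # original image dimension (e.g., flattened MNIST)
n_meas = n_pixels // 2      # 50% reduction in size
n_atoms = 256               # number of dictionary atoms
n_iters = 30                # number of unrolled ISTA iterations
lam = 0.1                   # sparsity penalty

# Fixed random measurement matrix (Gaussian here; very sparse projections also work)
Phi = rng.normal(0.0, 1.0 / np.sqrt(n_meas), size=(n_meas, n_pixels))

# Dictionary in the original pixel space, initialized with unit-norm columns
D = rng.normal(size=(n_pixels, n_atoms))
D /= np.linalg.norm(D, axis=0, keepdims=True)

def soft_threshold(z, t):
    # Proximal operator of the l1 norm
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_encode(y, Phi, D, lam, n_iters):
    """Sparse code x such that y ~ Phi @ D @ x, via unrolled ISTA iterations."""
    A = Phi @ D                              # effective dictionary in measurement space
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-fidelity gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Usage: compress one synthetic "image" and encode it from the measurements alone
img = rng.random(n_pixels)      # stand-in for a flattened MNIST image
y = Phi @ img                   # compressed random measurements (half the size)
x = ista_encode(y, Phi, D, lam, n_iters)
recon = D @ x                   # reconstruction back in the original pixel space
print(y.shape, x.shape, recon.shape)
```

In the full method, the dictionary D (and, in supervised settings, a classifier on the sparse codes) would be trained by backpropagating through the unrolled encoder; the sketch above only shows the forward pass on compressed data.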
