On Compressing Deep Models by Low Rank and Sparse Decomposition
Dacheng Tao | Tongliang Liu | Xinchao Wang | Xiyu Yu
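The decomposition named in the title can be illustrated by approximating a weight matrix W as L + S, where L is low-rank and S is sparse, alternating between a truncated-SVD projection and a hard-thresholding step in the spirit of GoDec (reference [5] below). This is a minimal sketch, not the paper's exact algorithm; the function name and parameters are illustrative only:

```python
import numpy as np

def low_rank_sparse_decompose(W, rank, card, iters=10):
    """Sketch: approximate W ~ L + S with rank(L) <= rank and
    at most `card` nonzero entries in S (GoDec-style alternation)."""
    S = np.zeros_like(W)
    for _ in range(iters):
        # Low-rank projection of the residual W - S via truncated SVD.
        U, sing, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :rank] * sing[:rank]) @ Vt[:rank]
        # Sparse projection: keep the `card` largest-magnitude residual entries.
        R = W - L
        idx = np.argsort(np.abs(R), axis=None)[::-1][:card]
        S = np.zeros_like(W)
        S.flat[idx] = R.flat[idx]
    return L, S

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 10))
L, S = low_rank_sparse_decompose(W, rank=3, card=30)
# rank(L) is at most 3; S has at most 30 nonzeros.
```

For model compression, L can be stored as two thin factors and S in a sparse format, which is the storage saving such decompositions exploit.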
[1] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Bhiksha Raj, et al. Greedy Sparsity-Constrained Optimization, 2011, 2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR).
[3] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[4] Joan Bruna, et al. Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation, 2014, NIPS.
[5] Dacheng Tao, et al. GoDec: Randomized Low-rank & Sparse Matrix Decomposition in Noisy Case, 2011, ICML.
[6] Gene H. Golub, et al. Matrix Computations, 1983.
[7] Nitish Srivastava, et al. Dropout: A Simple Way to Prevent Neural Networks from Overfitting, 2014, J. Mach. Learn. Res.
[8] Dacheng Tao, et al. Packing Convolutional Neural Networks in the Frequency Domain, 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[9] Yoshio Takane, et al. Constrained Principal Component Analysis: A Comprehensive Theory, 2001, Applicable Algebra in Engineering, Communication and Computing.
[10] Xiaogang Wang, et al. Face Model Compression by Distilling Knowledge from Neurons, 2016, AAAI.
[11] Ming Yang, et al. Compressing Deep Convolutional Networks Using Vector Quantization, 2014, arXiv.
[12] Mohammad Rastegari, et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, 2016, ECCV.
[13] Yixin Chen, et al. Compressing Neural Networks with the Hashing Trick, 2015, ICML.
[14] Andrew Zisserman, et al. Speeding up Convolutional Neural Networks with Low Rank Expansions, 2014, BMVC.
[15] Eunhyeok Park, et al. Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications, 2015, ICLR.
[16] Jian Sun, et al. Accelerating Very Deep Convolutional Networks for Classification and Detection, 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[17] Kai Yu, et al. Reshaping Deep Neural Network for Fast Decoding by Node-Pruning, 2014, 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[18] Dacheng Tao, et al. Greedy Bilateral Sketch, Completion & Smoothing, 2013, AISTATS.
[19] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[20] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[21] Kaiming He, et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, 2015, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[22] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[23] Suvrit Sra, et al. Diversity Networks, 2015, ICLR.
[24] Geoffrey E. Hinton, et al. ImageNet Classification with Deep Convolutional Neural Networks, 2012, Commun. ACM.
[25] Xiaogang Wang, et al. Convolutional Neural Networks with Low-Rank Regularization, 2015, ICLR.
[26] Trevor Darrell, et al. Caffe: Convolutional Architecture for Fast Feature Embedding, 2014, ACM Multimedia.
[27] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, arXiv.
[28] Dong Xu, et al. Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes, 2016, Neural Computation.
[29] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[30] Dumitru Erhan, et al. Going Deeper with Convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[31] Misha Denil, et al. Predicting Parameters in Deep Learning, 2014.
[32] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.