Viterbi-based Pruning for Sparse Matrix with Fixed and High Index Compression Ratio
Dongsoo Lee | Jae-Joon Kim | Taesu Kim | Pierce I-Jen Chuang | Daehyun Ahn
[1] Max Welling, et al. Variational Dropout and the Local Reparameterization Trick, 2015, NIPS.
[2] Arash Ardakani, et al. Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks, 2016, ICLR.
[3] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[4] Yves Chauvin, et al. A Back-Propagation Algorithm with Optimal Use of Hidden Units, 1988, NIPS.
[5] Song Han, et al. EIE: Efficient Inference Engine on Compressed Deep Neural Network, 2016, ISCA.
[6] Xinmiao Zhang. VLSI Architectures for Modern Error-Correcting Codes, 2015.
[7] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[8] Max Welling, et al. Bayesian Compression for Deep Learning, 2017, NIPS.
[9] C. E. Shannon. A Mathematical Theory of Communication, 1948, Bell System Technical Journal.
[10] G. D. Forney, Jr. The Viterbi Algorithm, 1973, Proceedings of the IEEE.
[11] Joan Bruna, et al. Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation, 2014, NIPS.
[12] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[13] Gregory J. Wolff, et al. Optimal Brain Surgeon and General Network Pruning, 1993, IEEE International Conference on Neural Networks.
[14] H.-L. Lou. Implementing the Viterbi Algorithm, 1995, IEEE Signal Processing Magazine.
[15] Yixin Chen, et al. Compressing Neural Networks with the Hashing Trick, 2015, ICML.
[16] Lorien Y. Pratt, et al. Comparing Biases for Minimal Network Construction with Back-Propagation, 1988, NIPS.
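For readers unfamiliar with the Viterbi algorithm cited in [10] and [14], the following is a minimal sketch of the classic dynamic-programming recursion (survivor selection plus traceback) as a textbook HMM decoder. It is not the paper's pruning method; the two-state weather model and all probabilities are illustrative values chosen here for the example.

```python
# Minimal Viterbi decoder for a discrete HMM, illustrating the
# dynamic-programming recursion described in refs [10] and [14].
# The model below is a toy example; all numbers are illustrative.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for an observation sequence."""
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]  # backpointers for path reconstruction

    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Survivor selection: keep only the best predecessor per state
            prev, p = max(
                ((ps, V[t - 1][ps] * trans_p[ps][s]) for ps in states),
                key=lambda x: x[1],
            )
            V[t][s] = p * emit_p[s][obs[t]]
            back[t][s] = prev

    # Trace back from the best final state to recover the full path
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Hypothetical two-state weather model
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

print(viterbi(("walk", "shop", "clean"), states, start_p, trans_p, emit_p))
# → ['Sunny', 'Rainy', 'Rainy']
```

The key property exploited by Viterbi-based pruning is that each trellis state keeps exactly one survivor path, so a fixed number of bits per step fully determines the decoded sequence.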