Sparseness Ratio Allocation and Neuron Re-pruning for Neural Networks Compression
Li Guo | Dajiang Zhou | Jinjia Zhou | Shinji Kimura
[1] Timo Aila, et al. Pruning Convolutional Neural Networks for Resource Efficient Inference, 2016, ICLR.
[2] Guigang Zhang, et al. Deep Learning, 2016, Int. J. Semantic Comput.
[3] Bin Yu, et al. Structural Compression of Convolutional Neural Networks Based on Greedy Filter Pruning, 2017, arXiv.
[4] Song Han, et al. EIE: Efficient Inference Engine on Compressed Deep Neural Network, 2016, ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA).
[5] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[6] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[7] Dajiang Zhou, et al. Chain-NN: An energy-efficient 1D chain architecture for accelerating deep convolutional neural networks, 2017, Design, Automation & Test in Europe Conference & Exhibition (DATE).
[8] Trevor Darrell, et al. Caffe: Convolutional Architecture for Fast Feature Embedding, 2014, ACM Multimedia.
[9] Vivienne Sze, et al. Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).