Hierarchical Knowledge Squeezed Adversarial Network Compression
Yanyun Qu | Peng Li | Yuan Xie | Hui Kong | Changyong Shu