Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
Linfeng Zhang | Jiebo Song | Anni Gao | Jingwei Chen | Chenglong Bao | Kaisheng Ma
[1] Mohammad Rastegari, et al. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, 2016, ECCV.
[2] Song Han, et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[3] Ali Farhadi, et al. Label Refinery: Improving ImageNet Classification through Label Progression, 2018, arXiv.
[4] Tao Mei, et al. KTAN: Knowledge Transfer Adversarial Network, 2018, 2020 International Joint Conference on Neural Networks (IJCNN).
[5] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Zhuowen Tu, et al. Aggregated Residual Transformations for Deep Neural Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Zachary Chase Lipton, et al. Born Again Neural Networks, 2018, ICML.
[8] Tao Mei, et al. Look Closer to See Better: Recurrent Attention Convolutional Neural Network for Fine-Grained Image Recognition, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Jitendra Malik, et al. Cross Modal Distillation for Supervision Transfer, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[10] Ning Xu, et al. Slimmable Neural Networks, 2018, ICLR.
[11] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[12] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2015, 2016 IEEE Symposium on Security and Privacy (SP).
[13] Larry S. Davis, et al. BlockDrop: Dynamic Inference Paths in Residual Networks, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[14] Jorge Nocedal, et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, 2016, ICLR.
[15] Bo Chen, et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, 2017, arXiv.
[16] Gang Wang, et al. Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[17] Hao Chen, et al. 3D Deeply Supervised Network for Automatic Liver Segmentation from CT Volumes, 2016, MICCAI.
[18] Kaiming He, et al. Focal Loss for Dense Object Detection, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[19] Nikos Komodakis, et al. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer, 2016, ICLR.
[20] Wei Liu, et al. SSD: Single Shot MultiBox Detector, 2015, ECCV.
[21] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[22] Kaiming He, et al. Feature Pyramid Networks for Object Detection, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[23] Serge Belongie, et al. Convolutional Networks with Adaptive Inference Graphs, 2019, International Journal of Computer Vision.
[24] Forrest N. Iandola, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size, 2016, arXiv.
[25] Rich Caruana, et al. Model compression, 2006, KDD '06.
[26] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[27] Joachim Denzler, et al. Impatient DNNs - Deep Neural Networks with Dynamic Time Budgets, 2016, BMVC.
[28] Kilian Q. Weinberger, et al. Deep Networks with Stochastic Depth, 2016, ECCV.
[29] Zhiqiang Shen, et al. MEAL: Multi-Model Ensemble via Adversarial Learning, 2018, AAAI.
[31] Kilian Q. Weinberger, et al. Multi-Scale Dense Networks for Resource Efficient Image Classification, 2017, ICLR.
[32] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[33] Zhuowen Tu, et al. Deeply-Supervised Nets, 2014, AISTATS.
[34] Yoshua Bengio, et al. BinaryConnect: Training Deep Neural Networks with binary weights during propagations, 2015, NIPS.
[35] Hao Chen, et al. Volumetric ConvNets with Mixed Residual Connections for Automated Prostate Segmentation from 3D MR Images, 2017, AAAI.
[36] Xin Wang, et al. SkipNet: Learning Dynamic Routing in Convolutional Networks, 2017, ECCV.
[37] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[39] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[40] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[41] Fei-Fei Li, et al. ImageNet: A large-scale hierarchical image database, 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[42] Huchuan Lu, et al. Deep Mutual Learning, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[43] Junmo Kim, et al. Deep Pyramidal Residual Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[44] Alex Graves, et al. Recurrent Models of Visual Attention, 2014, NIPS.
[45] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, arXiv.