Training Lightweight yet Competent Network via Transferring Complementary Features
Hai-gang Gong | Ming Liu | Xiaobing Zhang | Minghui Liu | Shijian Lu