Jie Song | Gongfan Fang | Haofei Zhang | Kanya Mo | Mingli Song | Xinchao Wang | Shitao Bei
[1] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[2] Chen Chen, et al. Adversarial Self-Supervised Data Free Distillation for Text Classification, 2020, EMNLP.
[3] Lucas Beyer, et al. Big Transfer (BiT): General Visual Representation Learning, 2020, ECCV.
[4] Xinchao Wang, et al. Data-Free Adversarial Distillation, 2019, arXiv.
[5] Amos Storkey, et al. Zero-shot Knowledge Transfer via Adversarial Belief Matching, 2019, NeurIPS.
[6] George Papandreou, et al. Rethinking Atrous Convolution for Semantic Image Segmentation, 2017, arXiv.
[7] Jiayu Zhou, et al. Data-Free Knowledge Distillation for Heterogeneous Federated Learning, 2021, ICML.
[8] Qi Tian, et al. Data-Free Learning of Student Networks, 2019, ICCV.
[9] Derek Hoiem, et al. Indoor Segmentation and Support Inference from RGBD Images, 2012, ECCV.
[10] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[11] Derek Hoiem, et al. Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion, 2020, CVPR.
[12] Amos Storkey, et al. Meta-Learning in Neural Networks: A Survey, 2020, IEEE TPAMI.
[13] Jihwan P. Choi, et al. Data-Free Network Quantization With Adversarial Knowledge Distillation, 2020, CVPR Workshops.
[14] Li Sun, et al. Amalgamating Knowledge towards Comprehensive Classification, 2018, AAAI.
[15] Chao Xu, et al. Learning Student Networks in the Wild, 2021, CVPR.
[16] Andrew Howard, et al. Large-Scale Generative Data-Free Distillation, 2020, arXiv.
[17] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[18] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, arXiv.
[19] Thad Starner, et al. Data-Free Knowledge Distillation for Deep Neural Networks, 2017, arXiv.
[20] Xiang Deng, et al. Graph-Free Knowledge Distillation for Graph Neural Networks, 2021, IJCAI.
[21] Xinchao Wang, et al. Mosaicking to Distill: Knowledge Distillation from Out-of-Domain Data, 2021, NeurIPS.
[22] Joshua Achiam, et al. On First-Order Meta-Learning Algorithms, 2018, arXiv.
[23] Pavlo Molchanov, et al. Data-free Knowledge Distillation for Object Detection, 2021, WACV.
[24] Mingli Song, et al. Contrastive Model Inversion for Data-Free Knowledge Distillation, 2021, arXiv.
[25] D. Tao, et al. Distilling Knowledge From Graph Convolutional Networks, 2020, CVPR.
[26] Mingli Song, et al. Student Becoming the Master: Knowledge Amalgamation for Joint Scene Parsing, Depth Estimation, and More, 2019, CVPR.