Relational Knowledge Distillation
Wonpyo Park | Dongju Kim | Yan Lu | Minsu Cho
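Per its title, the paper distills relational knowledge: rather than matching individual teacher and student outputs, it matches the structure among a batch of examples (for instance, their mutual distances) across the two embedding spaces. Below is a minimal sketch of a distance-wise relational distillation loss, assuming a PyTorch setting; the function names, the Huber (smooth L1) penalty, and the mean-distance normalization are illustrative choices for this sketch, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F


def pairwise_distances(embeddings: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Euclidean distance matrix for a batch of embeddings with shape (N, D)."""
    prod = embeddings @ embeddings.t()                     # (N, N) inner products
    sq_norms = prod.diagonal()                             # (N,) squared norms
    dist_sq = sq_norms.unsqueeze(0) + sq_norms.unsqueeze(1) - 2.0 * prod
    return dist_sq.clamp(min=eps).sqrt()                   # eps keeps gradients finite


def rkd_distance_loss(student_emb: torch.Tensor, teacher_emb: torch.Tensor) -> torch.Tensor:
    """Distance-wise relational distillation: penalize the gap between the
    student's and teacher's pairwise-distance structures, each normalized by
    its own mean distance, using a Huber (smooth L1) penalty."""
    n = student_emb.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=student_emb.device)
    with torch.no_grad():                                  # teacher provides fixed targets
        t_d = pairwise_distances(teacher_emb)[off_diag]
        t_d = t_d / t_d.mean()
    s_d = pairwise_distances(student_emb)[off_diag]
    s_d = s_d / s_d.mean()
    return F.smooth_l1_loss(s_d, t_d)
```

In training, such a term would typically be added to the student's task loss, e.g. loss = task_loss + lambda_rkd * rkd_distance_loss(student_emb, teacher_emb), with the teacher frozen; lambda_rkd is a hypothetical weighting hyperparameter.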
[1] Zachary Chase Lipton et al. Born Again Neural Networks, 2018, ICML.
[2] Nikos Komodakis et al. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer, 2016, ICLR.
[3] Bernhard Schölkopf et al. Unifying distillation and privileged information, 2015, ICLR.
[4] Vittorio Ferrari et al. Revisiting Knowledge Transfer for Training Object Class Detectors, 2018, CVPR.
[5] Jonathan Krause et al. 3D Object Representations for Fine-Grained Categorization, 2013, ICCV Workshops.
[6] Zhi Zhang et al. Fast Deep Neural Networks With Knowledge Guided Training and Predicted Regions of Interests for Real-Time Video Object Detection, 2018, IEEE Access.
[7] Xiaogang Wang et al. Face Model Compression by Distilling Knowledge from Neurons, 2016, AAAI.
[8] Ali Farhadi et al. Label Refinery: Improving ImageNet Classification through Label Progression, 2018, arXiv.
[9] Leo Breiman et al. Born Again Trees, 1996.
[10] Dong Wang et al. Learning to Navigate for Fine-grained Classification, 2018, ECCV.
[11] Alexander J. Smola et al. Sampling Matters in Deep Embedding Learning, 2017, ICCV.
[12] Harri Valpola et al. Weight-averaged consistency targets improve semi-supervised deep learning results, 2017, arXiv.
[13] Rich Caruana et al. Do Deep Nets Really Need to be Deep?, 2013, NIPS.
[14] Dumitru Erhan et al. Going deeper with convolutions, 2015, CVPR.
[15] Kihyuk Sohn et al. Improved Deep Metric Learning with Multi-class N-pair Loss Objective, 2016, NIPS.
[16] Yuxin Peng et al. Object-Part Attention Model for Fine-Grained Image Classification, 2017, IEEE Transactions on Image Processing.
[17] Peter Matthews et al. A short history of structural linguistics, 2001.
[18] Asit K. Mishra et al. Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy, 2017, ICLR.
[19] Kaiming He et al. Data Distillation: Towards Omni-Supervised Learning, 2018, CVPR.
[20] James Philbin et al. FaceNet: A unified embedding for face recognition and clustering, 2015, CVPR.
[21] Xiang Yu et al. Deep Metric Learning via Lifted Structured Feature Embedding, 2016.
[22] Qi Tian et al. Picking Deep Filter Responses for Fine-Grained Image Recognition, 2016, CVPR.
[23] Zheng Xu et al. Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks, 2017, ICLR.
[24] Rauf Izmailov et al. Learning using privileged information: similarity control and knowledge transfer, 2015, Journal of Machine Learning Research.
[25] Pietro Perona et al. The Caltech-UCSD Birds-200-2011 Dataset, 2011.
[26] Horst Possegger et al. Deep Metric Learning with BIER: Boosting Independent Embeddings Robustly, 2018, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[27] Oriol Vinyals et al. Matching Networks for One Shot Learning, 2016, NIPS.
[28] Andrew Zisserman et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[29] Jian Wang et al. Deep Metric Learning with Angular Loss, 2017, ICCV.
[30] Junmo Kim et al. A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning, 2017, CVPR.
[31] Dan Alistarh et al. Model compression via distillation and quantization, 2018, ICLR.
[32] Yoshua Bengio et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[33] Richard S. Zemel et al. Prototypical Networks for Few-shot Learning, 2017, NIPS.
[34] Vineeth N. Balasubramanian et al. Deep Model Compression: Distilling Knowledge from Noisy Teachers, 2016, arXiv.
[35] Michael S. Bernstein et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[36] Jian Sun et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[37] Amos J. Storkey et al. Moonshine: Distilling with Cheap Convolutions, 2017, NeurIPS.
[38] F. Saussure et al. Course in General Linguistics, 1960.
[39] Naiyan Wang et al. Like What You Like: Knowledge Distill via Neuron Selectivity Transfer, 2017, arXiv.
[40] Zhaoxiang Zhang et al. DarkRank: Accelerating Deep Metric Learning via Cross Sample Similarities Transfer, 2017, AAAI.
[41] Jungmin Lee et al. Attention-based Ensemble for Deep Metric Learning, 2018, ECCV.
[42] Alex Krizhevsky et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[43] Geoffrey E. Hinton et al. Distilling the Knowledge in a Neural Network, 2015, arXiv.
[44] Thad Starner et al. Data-Free Knowledge Distillation for Deep Neural Networks, 2017, arXiv.
[45] Joshua B. Tenenbaum et al. Human-level concept learning through probabilistic program induction, 2015, Science.
[46] Tony X. Han et al. Learning Efficient Object Detection Models with Knowledge Distillation, 2017, NIPS.