Few-Shot Class-Incremental Learning via Relation Knowledge Distillation
Yihong Gong | Xiaopeng Hong | Xiaoyu Tao | Xinyuan Chang | Xing Wei | Songlin Dong