MetaMix: Towards Corruption-Robust Continual Learning with Temporally Self-Adaptive Data Transformation
[1] Dacheng Tao, et al. Balancing Stability and Plasticity through Advanced Null Space in Continual Learning, 2022, ECCV.
[2] Jiaxian Guo, et al. Online Continual Learning with Contrastive Vision Transformer, 2022, ECCV.
[3] Qiuling Suo, et al. Improving Task-free Continual Learning by Distributionally Robust Memory Evolution, 2022, ICML.
[4] Zhenyi Wang, et al. Learning to Learn and Remember Super Long Multi-Domain Task Sequence, 2022, CVPR.
[5] Nilesh A. Ahuja, et al. Continual Active Adaptation to Evolving Distributional Shifts, 2022, CVPRW.
[6] Kun-Juan Wei, et al. Not Just Selection, but Exploration: Online Class-Incremental Continual Learning via Dual View Consistency, 2022, CVPR.
[7] Jennifer G. Dy, et al. DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning, 2022, ECCV.
[8] Jung-Woo Ha, et al. Online Continual Learning on a Contaminated Data Stream with Blurry Task Boundaries, 2022, CVPR.
[9] Jie Song, et al. Meta-attention for ViT-backed Continual Learning, 2022, CVPR.
[10] Dong Gong, et al. Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning, 2022, CVPR.
[11] Zhenguo Li, et al. Memory Replay with Data Compression for Continual Learning, 2022, ICLR.
[12] Junshan Zhang, et al. TRGP: Trust Region Gradient Projection for Continual Learning, 2022, ICLR.
[13] Elahe Arani, et al. Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System, 2022, ICLR.
[14] Jennifer G. Dy, et al. Learning to Prompt for Continual Learning, 2022, CVPR.
[15] Gunhee Kim, et al. Continual Learning on Noisy Data Streams via Self-Purified Replay, 2021, ICCV.
[16] Qiuling Suo, et al. Meta Learning on a Sequence of Imbalanced Domains with Difficulty Awareness, 2021, ICCV.
[17] Wei Hu, et al. A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning, 2021, ICML.
[18] Qiang Liu, et al. MaxUp: Lightweight Adversarial Training with Data Augmentation Improves Neural Network Training, 2021, CVPR.
[19] Brian Lester, et al. The Power of Scale for Parameter-Efficient Prompt Tuning, 2021, EMNLP.
[20] Timothy A. Mann, et al. Defending Against Image Corruptions Through Adversarial Augmentations, 2021, ICLR.
[21] Kaushik Roy, et al. Gradient Projection Memory for Continual Learning, 2021, ICLR.
[22] Benjamin F. Grewe, et al. Posterior Meta-Replay for Continual Learning, 2021, NeurIPS.
[23] Gunshi Gupta, et al. La-MAML: Look-ahead Meta Learning for Continual Learning, 2020, NeurIPS.
[24] D. Song, et al. The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization, 2021, ICCV.
[25] Taesup Moon, et al. CPR: Classifier-Projection Regularization for Continual Learning, 2020, ICLR.
[26] Simone Calderara, et al. Dark Experience for General Continual Learning: a Strong, Simple Baseline, 2020, NeurIPS.
[27] Adrian Popescu, et al. IL2M: Class Incremental Learning With Dual Memory, 2019, ICCV.
[28] Quoc V. Le, et al. RandAugment: Practical automated data augmentation with a reduced search space, 2020, CVPRW.
[29] Tinne Tuytelaars, et al. Online Continual Learning with Maximally Interfered Retrieval, 2019, arXiv.
[30] Trevor Darrell, et al. Uncertainty-guided Continual Learning with Bayesian Neural Networks, 2019, ICLR.
[31] Seong Joon Oh, et al. CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features, 2019, ICCV.
[32] Andreas S. Tolias, et al. Three scenarios for continual learning, 2019, arXiv.
[33] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2019, ICLR.
[34] Marc'Aurelio Ranzato, et al. Continual Learning with Tiny Episodic Memories, 2019, arXiv.
[35] Marc'Aurelio Ranzato, et al. Efficient Lifelong Learning with A-GEM, 2018, ICLR.
[36] G. Tesauro, et al. Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference, 2018, ICLR.
[37] Quoc V. Le, et al. AutoAugment: Learning Augmentation Policies from Data, 2018, arXiv.
[38] Yee Whye Teh, et al. Progress & Compress: A scalable framework for continual learning, 2018, ICML.
[39] Alexandros Karatzoglou, et al. Overcoming catastrophic forgetting with hard attention to the task, 2018, ICML.
[40] Richard E. Turner, et al. Variational Continual Learning, 2017, ICLR.
[41] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[42] Graham W. Taylor, et al. Improved Regularization of Convolutional Neural Networks with Cutout, 2017, arXiv.
[43] Sung Ju Hwang, et al. Lifelong Learning with Dynamically Expandable Networks, 2017, ICLR.
[44] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[45] Marc'Aurelio Ranzato, et al. Gradient Episodic Memory for Continual Learning, 2017, NeurIPS.
[46] Jiwon Kim, et al. Continual Learning with Deep Generative Replay, 2017, NeurIPS.
[47] Surya Ganguli, et al. Continual Learning Through Synaptic Intelligence, 2017, ICML.
[48] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[49] Chrisantha Fernando, et al. PathNet: Evolution Channels Gradient Descent in Super Neural Networks, 2017, arXiv.
[50] Andrei A. Rusu, et al. Overcoming catastrophic forgetting in neural networks, 2016, Proceedings of the National Academy of Sciences.
[51] Razvan Pascanu, et al. Progressive Neural Networks, 2016, arXiv.
[52] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NeurIPS.
[53] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[54] E. Todeva. Networks, 2007.
[55] Tim G. J. Rudner, et al. Continual Learning via Sequential Function-Space Variational Inference, 2022, ICML.
[56] Eunwoo Kim, et al. Helpful or Harmful: Inter-task Association in Continual Learning, 2022, ECCV.
[57] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.