Richard Socher | Caiming Xiong | Zhenhui Li | Mehrdad Mahdavi | Yingbo Zhou | Huaxiu Yao