Leveraging Task Variability in Meta-learning
[1] Yu Wang, et al. Meta-learning with an Adaptive Task Scheduler, 2021, NeurIPS.
[2] Peter Stone, et al. Conflict-Averse Gradient Descent for Multi-task Learning, 2021, NeurIPS.
[3] Yuekai Sun, et al. On Sensitivity of Meta-learning to Support Data, 2021, NeurIPS.
[4] Stefano Soatto, et al. Uniform Sampling over Episode Difficulty, 2021, NeurIPS.
[5] Chelsea Finn, et al. Just Train Twice: Improving Group Robustness without Training Group Information, 2021, ICML.
[6] Sebastian Nowozin, et al. Memory Efficient Meta-Learning with Large Images, 2021, NeurIPS.
[7] Sung Ju Hwang, et al. Large-Scale Meta-Learning with Continual Trajectory Shifting, 2021, ICML.
[8] Narayanan C. Krishnan, et al. Stress Testing of Meta-learning Approaches for Few-shot Learning, 2021, MetaDL@AAAI.
[9] Ricardo Luna Gutierrez, et al. Information-theoretic Task Selection for Meta-Reinforcement Learning, 2020, NeurIPS.
[10] Sébastien M. R. Arnold, et al. learn2learn: A Library for Meta-Learning Research, 2020, ArXiv.
[11] Zhihao Wang, et al. Adaptive Task Sampling for Meta-Learning, 2020, ECCV.
[12] Marc Peter Deisenroth, et al. Probabilistic Active Meta-Learning, 2020, NeurIPS.
[13] S. Gelly, et al. Big Transfer (BiT): General Visual Representation Learning, 2019, ECCV.
[14] Kate Saenko, et al. A Broader Study of Cross-Domain Few-Shot Learning, 2019, ECCV.
[15] Bernt Schiele, et al. Meta-Transfer Learning Through Hard Tasks, 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[16] André Susano Pinto, et al. A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark, 2019, ArXiv.
[17] Oriol Vinyals, et al. Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML, 2019, ICLR.
[18] Stefano Soatto, et al. A Baseline for Few-Shot Image Classification, 2019, ICLR.
[19] Yu-Chiang Frank Wang, et al. A Closer Look at Few-shot Classification, 2019, ICLR.
[20] Hugo Larochelle, et al. Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples, 2019, ICLR.
[21] Jian Li, et al. On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning, 2019, ICLR.
[22] Bernt Schiele, et al. Meta-Transfer Learning for Few-Shot Learning, 2018, CVPR.
[23] Amos J. Storkey, et al. How to train your MAML, 2018, ICLR.
[24] Razvan Pascanu, et al. Meta-Learning with Latent Embedding Optimization, 2018, ICLR.
[25] Alexandre Lacoste, et al. TADAM: Task dependent adaptive metric for improved few-shot learning, 2018, NeurIPS.
[26] Mubarak Shah, et al. Task Agnostic Meta-Learning for Few-Shot Learning, 2018, CVPR.
[27] Bin Yang, et al. Learning to Reweight Examples for Robust Deep Learning, 2018, ICML.
[28] Joshua B. Tenenbaum, et al. Meta-Learning for Semi-Supervised Few-Shot Classification, 2018, ICLR.
[29] Li Fei-Fei, et al. MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels, 2017, ICML.
[30] Kaiming He, et al. Focal Loss for Dense Object Detection, 2017, ICCV.
[31] Hang Li, et al. Meta-SGD: Learning to Learn Quickly for Few Shot Learning, 2017, ArXiv.
[32] Andrew McCallum, et al. Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples, 2017, NIPS.
[33] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[34] Hugo Larochelle, et al. Optimization as a Model for Few-Shot Learning, 2016, ICLR.
[35] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NIPS.
[36] Abhinav Gupta, et al. Training Region-Based Object Detectors with Online Hard Example Mining, 2016, CVPR.
[37] Tong Zhang, et al. Stochastic Optimization with Importance Sampling for Regularized Loss Minimization, 2014, ICML.
[38] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[39] Daphne Koller, et al. Self-Paced Learning for Latent Variable Models, 2010, NIPS.
[40] Jason Weston, et al. Curriculum Learning, 2009, ICML.
[41] George Loizou, et al. Computer Vision and Pattern Recognition, 2007, Int. J. Comput. Math.
[42] S. Hochreiter, et al. Long Short-Term Memory, 1997, Neural Computation.
[43] H. Kahn, et al. Methods of Reducing Sample Size in Monte Carlo Computations, 1953, Oper. Res.
[44] H. Larochelle, et al. A Unified Few-Shot Classification Benchmark to Compare Transfer and Meta Learning Approaches, 2021, NeurIPS Datasets and Benchmarks.
[45] SeYoung Yun, et al. BOIL: Towards Representation Change for Few-shot Learning, 2021, ICLR.