[1] Stefano Soatto, et al. Entropy-SGD: biasing gradient descent into wide valleys, 2016, ICLR.
[2] Subhransu Maji, et al. Meta-Learning With Differentiable Convex Optimization, 2019, CVPR.
[3] Razvan Pascanu, et al. Sharp Minima Can Generalize For Deep Nets, 2017, ICML.
[4] Sergey Levine, et al. Meta-Learning with Implicit Gradients, 2019, NeurIPS.
[6] C. A. Nelson, et al. Learning to Learn, 2017, Encyclopedia of Machine Learning and Data Mining.
[7] Alexandre Lacoste, et al. TADAM: Task dependent adaptive metric for improved few-shot learning, 2018, NeurIPS.
[8] Chelsea Finn, et al. Meta-Learning without Memorization, 2020, ICLR.
[9] Abhishek Sinha, et al. Charting the Right Manifold: Manifold Mixup for Few-shot Learning, 2020, WACV.
[10] Richard S. Zemel, et al. Prototypical Networks for Few-shot Learning, 2017, NIPS.
[11] Jürgen Schmidhuber, et al. Flat Minima, 1997, Neural Computation.
[12] Jorge Nocedal, et al. Optimization Methods for Large-Scale Machine Learning, 2016, SIAM Review.
[13] Hugo Larochelle, et al. Optimization as a Model for Few-Shot Learning, 2016, ICLR.
[14] Pietro Perona, et al. One-shot learning of object categories, 2006, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[15] Nicolas Le Roux, et al. On the interplay between noise and curvature and its effect on optimization and generalization, 2019, AISTATS.
[16] Razvan Pascanu, et al. Meta-Learning with Latent Embedding Optimization, 2018, ICLR.
[17] Andrew Gordon Wilson, et al. Averaging Weights Leads to Wider Optima and Better Generalization, 2018, UAI.
[18] Yoshua Bengio, et al. Three Factors Influencing Minima in SGD, 2017, arXiv.
[19] Trevor Darrell, et al. A New Meta-Baseline for Few-Shot Learning, 2020, arXiv.
[20] Hao Li, et al. Visualizing the Loss Landscape of Neural Nets, 2017, NeurIPS.
[21] Vincent Gripon, et al. Leveraging the Feature Distribution in Transfer-based Few-Shot Learning, 2020, arXiv.
[22] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NIPS.
[23] Sergey Levine, et al. Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm, 2017, ICLR.
[24] Fei Chao, et al. Task Augmentation by Rotating for Meta-Learning, 2020, arXiv.
[25] Tao Xiang, et al. Learning to Compare: Relation Network for Few-Shot Learning, 2018, CVPR.
[26] Sergey Levine, et al. Probabilistic Model-Agnostic Meta-Learning, 2018, NeurIPS.
[27] Peter L. Bartlett, et al. Rademacher and Gaussian Complexities: Risk Bounds and Structural Results, 2003, Journal of Machine Learning Research.
[28] Jorge Nocedal, et al. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, 2016, ICLR.
[29] Luca Bertinetto, et al. Meta-learning with differentiable closed-form solvers, 2018, ICLR.
[30] Yu-Chiang Frank Wang, et al. A Closer Look at Few-shot Classification, 2019, ICLR.
[31] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.