Multi-task few-shot learning with composed data augmentation for image classification
