On the Theory of Transfer Learning: The Importance of Task Diversity
[1] M. Talagrand, et al. Probability in Banach Spaces: Isoperimetry and Processes, 1991.
[2] P. Bickel. Efficient and Adaptive Estimation for Semiparametric Models, 1993.
[3] K. Do, et al. Efficient and Adaptive Estimation for Semiparametric Models, 1994.
[4] Jonathan Baxter, et al. A Model of Inductive Bias Learning, 2000, J. Artif. Intell. Res.
[5] P. Massart, et al. About the constants in Talagrand's concentration inequalities for empirical processes, 2000.
[6] Rich Caruana, et al. Multitask Learning, 1997, Machine Learning.
[7] P. Bartlett, et al. Local Rademacher complexities, 2005, arXiv:math/0508275.
[8] Qi Li, et al. Nonparametric Econometrics: Theory and Practice, 2006.
[9] Shai Ben-David, et al. A notion of task relatedness yielding provable multiple-task learning guarantees, 2008, Machine Learning.
[10] Claudio Gentile, et al. Linear Algorithms for Online Multitask Classification, 2010, COLT.
[11] S. van de Geer, et al. Oracle Inequalities and Optimal Inference under Group Sparsity, 2010, arXiv:1007.1771.
[12] Adam Tauman Kalai, et al. Efficient Learning of Generalized Linear and Single Index Models with Isotonic Regression, 2011, NIPS.
[13] Michael I. Jordan, et al. Union support recovery in high-dimensional multivariate regression, 2008, 46th Annual Allerton Conference on Communication, Control, and Computing.
[14] Roman Vershynin, et al. Introduction to the non-asymptotic analysis of random matrices, 2010, Compressed Sensing.
[15] Pascal Vincent, et al. Representation Learning: A Review and New Perspectives, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[16] Massimiliano Pontil, et al. Excess risk bounds for multitask learning with trace norm regularization, 2012, COLT.
[17] Yoshua Bengio, et al. How transferable are features in deep neural networks?, 2014, NIPS.
[18] Andreas Maurer, et al. A chain rule for the expected suprema of Gaussian processes, 2014, Theor. Comput. Sci.
[19] Trevor Darrell, et al. DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition, 2013, ICML.
[20] Subhashini Venugopalan, et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs, 2016, JAMA.
[21] Massimiliano Pontil, et al. The Benefit of Multitask Representation Learning, 2015, J. Mach. Learn. Res.
[22] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[23] Ohad Shamir, et al. Size-Independent Sample Complexity of Neural Networks, 2017, COLT.
[24] J. Plotkin, et al. Inferring the shape of global epistasis, 2018, Proceedings of the National Academy of Sciences.
[25] Massimiliano Pontil, et al. Online-Within-Online Meta-Learning, 2019, NeurIPS.
[26] Massimiliano Pontil, et al. Learning-to-Learn Stochastic Gradient Descent with Biased Regularization, 2019, ICML.
[27] Maria-Florina Balcan, et al. Provable Guarantees for Gradient-Based Meta-Learning, 2019, ICML.
[28] Martin J. Wainwright. High-Dimensional Statistics, 2019.
[29] Sergey Levine, et al. Online Meta-Learning, 2019, ICML.
[30] Burkhard Rost, et al. End-to-end multitask learning, from protein language to protein features without alignments, 2019, bioRxiv.
[31] Luke S. Zettlemoyer, et al. Cloze-driven Pretraining of Self-attention Networks, 2019, EMNLP.
[32] Maria-Florina Balcan, et al. Adaptive Gradient-Based Meta-Learning Methods, 2019, NeurIPS.
[33] Subhransu Maji, et al. Meta-Learning With Differentiable Convex Optimization, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[34] Oriol Vinyals, et al. Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML, 2019, ICLR.
[35] Timothy M. Hospedales, et al. Meta-Learning in Neural Networks: A Survey, 2020, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[36] S. Kakade, et al. Few-Shot Learning via Learning the Representation, Provably, 2020, ICLR.
[37] Michael I. Jordan, et al. Provable Meta-Learning of Linear Representations, 2020, ICML.