Sergey Levine | Chelsea Finn | Marvin Zhang | Henrik Marklund | Abhishek Gupta
[1] Gang Niu, et al. Does Distributionally Robust Supervised Learning Give Robust Classifiers?, 2016, ICML.
[2] Sepp Hochreiter, et al. Learning to Learn Using Gradient Descent, 2001, ICANN.
[3] Yoshua Bengio, et al. On the Optimization of a Synaptic Learning Rule, 2007.
[4] Hugo Larochelle, et al. Optimization as a Model for Few-Shot Learning, 2016, ICLR.
[5] Sergey Levine, et al. Unsupervised Meta-Learning for Reinforcement Learning, 2018, ArXiv.
[6] Alexander J. Smola, et al. Detecting and Correcting for Label Shift with Black Box Predictors, 2018, ICML.
[7] Marco Pavone, et al. Continuous Meta-Learning without Tasks, 2019, NeurIPS.
[8] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[9] Neil D. Lawrence, et al. Dataset Shift in Machine Learning, 2009.
[10] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[11] Joshua B. Tenenbaum, et al. Meta-Learning for Semi-Supervised Few-Shot Classification, 2018, ICLR.
[12] Qingming Huang, et al. Relay Backpropagation for Effective Learning of Deep Convolutional Neural Networks, 2015, ECCV.
[13] Junshan Zhang, et al. A Collaborative Learning Framework via Federated Meta-Learning, 2020, ArXiv.
[14] Anja De Waegenaere, et al. Robust Solutions of Optimization Problems Affected by Uncertain Probabilities, 2011, Manag. Sci.
[15] Victor S. Lempitsky, et al. Unsupervised Domain Adaptation by Backpropagation, 2014, ICML.
[16] Sreeram Kannan, et al. Improving Federated Learning Personalization via Model Agnostic Meta Learning, 2019, ArXiv.
[17] Blaise Agüera y Arcas, et al. Communication-Efficient Learning of Deep Networks from Decentralized Data, 2016, AISTATS.
[18] Christoph H. Lampert, et al. Classifier Adaptation at Prediction Time, 2015, CVPR.
[19] Trevor Darrell, et al. Fully Test-Time Adaptation by Entropy Minimization, 2020, ArXiv.
[20] Eunho Yang, et al. Learning to Propagate Labels: Transductive Propagation Network for Few-Shot Learning, 2018, ICLR.
[21] Dawn Song, et al. Natural Adversarial Examples, 2019, CVPR.
[22] Fei Chen, et al. Federated Meta-Learning with Fast Convergence and Efficient Communication, 2018.
[23] Trevor Darrell, et al. Adversarial Discriminative Domain Adaptation, 2017, CVPR.
[24] Yee Whye Teh, et al. Conditional Neural Processes, 2018, ICML.
[25] Mehryar Mohri, et al. Sample Selection Bias Correction Theory, 2008, ALT.
[26] Yee Whye Teh, et al. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables, 2016, ICLR.
[27] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[28] Michael I. Jordan, et al. Conditional Adversarial Domain Adaptation, 2017, NeurIPS.
[29] Junier B. Oliva, et al. Meta-Neighborhoods, 2019, NeurIPS.
[30] D. Lazer, et al. The Parable of Google Flu: Traps in Big Data Analysis, 2014, Science.
[31] Krishna P. Gummadi, et al. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment, 2016, WWW.
[32] Aryan Mokhtari, et al. Personalized Federated Learning: A Meta-Learning Approach, 2020, ArXiv.
[33] Carlos Fernandez-Granda, et al. Be Like Water: Robustness to Extraneous Variables via Adaptive Feature Normalization, 2020, ArXiv.
[34] Diane J. Cook, et al. A Survey of Unsupervised Deep Domain Adaptation, 2018, ACM Trans. Intell. Syst. Technol.
[35] Daan Wierstra, et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models, 2014, ICML.
[36] David Lopez-Paz, et al. In Search of Lost Domain Generalization, 2020, ICLR.
[37] Tian Li, et al. Fair Resource Allocation in Federated Learning, 2019, ICLR.
[38] Yongxin Yang, et al. Learning to Generalize: Meta-Learning for Domain Generalization, 2017, AAAI.
[39] Kiyoharu Aizawa, et al. Personalized Classifier for Food Image Recognition, 2018, IEEE Transactions on Multimedia.
[40] Sergey Levine, et al. One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning, 2018, Robotics: Science and Systems.
[41] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[42] Amir Globerson, et al. Nightmare at Test Time: Robust Learning by Feature Deletion, 2006, ICML.
[43] Bernhard Schölkopf, et al. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations, 2018, ICML.
[44] Alex ChiChung Kot, et al. Domain Generalization with Adversarial Feature Learning, 2018, CVPR.
[45] Joshua B. Tenenbaum, et al. Infinite Mixture Prototypes for Few-Shot Learning, 2019, ICML.
[46] J. Schulman, et al. Reptile: A Scalable Metalearning Algorithm, 2018.
[47] Ben Poole, et al. Categorical Reparameterization with Gumbel-Softmax, 2016, ICLR.
[48] Yoshua Bengio, et al. MetaGAN: An Adversarial Approach to Few-Shot Learning, 2018, NeurIPS.
[49] Percy Liang, et al. Distributionally Robust Language Modeling, 2019, EMNLP.
[50] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NIPS.
[51] Sebastian Caldas, et al. LEAF: A Benchmark for Federated Settings, 2018, ArXiv.
[52] Jiaying Liu, et al. Revisiting Batch Normalization for Practical Domain Adaptation, 2016, ICLR.
[53] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2018, ICLR.
[54] Gregory Cohen, et al. EMNIST: An Extension of MNIST to Handwritten Letters, 2017, IJCNN.
[55] Yuan Shi, et al. Geodesic Flow Kernel for Unsupervised Domain Adaptation, 2012, CVPR.
[56] Yevgen Chebotar, et al. Meta Learning via Learned Loss, 2019, ICPR.
[57] Daan Wierstra, et al. Meta-Learning with Memory-Augmented Neural Networks, 2016, ICML.
[58] Martial Hebert, et al. Learning to Model the Tail, 2017, NIPS.
[59] Daniel C. Castro, et al. Domain Generalization via Model-Agnostic Learning of Semantic Features, 2019, NeurIPS.
[60] Karthyek Murthy, et al. Robust Wasserstein Profile Inference and Applications to Machine Learning, 2019, J. Appl. Probab.
[61] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, CVPR.
[62] Masashi Sugiyama, et al. Mixture Regression for Covariate Shift, 2006, NIPS.
[63] Amos Storkey, et al. Learning to Learn via Self-Critique, 2019, ArXiv.
[64] Daniel Kuhn, et al. Data-Driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations, 2015, Mathematical Programming.
[65] Bernhard Schölkopf, et al. Correcting Sample Selection Bias by Unlabeled Data, 2006, NIPS.
[66] Alexei A. Efros, et al. Test-Time Training for Out-of-Distribution Generalization, 2019, ArXiv.
[67] Gilles Blanchard, et al. Generalizing from Several Related Classification Tasks to a New Unlabeled Sample, 2011, NIPS.
[68] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[69] Bernhard Schölkopf, et al. On Causal and Anticausal Learning, 2012, ICML.
[70] Jascha Sohl-Dickstein, et al. Meta-Learning Update Rules for Unsupervised Representation Learning, 2018, ICLR.
[71] Vladimir Vapnik, et al. Statistical Learning Theory, 1998.
[72] Brian C. Lovell, et al. Unsupervised Domain Adaptation by Domain Invariant Projection, 2013, ICCV.
[73] Alexander D'Amour, et al. Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift, 2020, ArXiv.
[74] Sergey Levine, et al. Unsupervised Learning via Meta-Learning, 2018, ICLR.
[75] H. Shimodaira, et al. Improving Predictive Inference under Covariate Shift by Weighting the Log-Likelihood Function, 2000.
[76] Jiri Matas, et al. Improving CNN Classifiers by Estimating Test-Time Priors, 2018, ICCVW.
[77] Shin Ishii, et al. Distributional Smoothing with Virtual Adversarial Training, 2015, ICLR.
[78] Richard S. Zemel, et al. Prototypical Networks for Few-Shot Learning, 2017, NIPS.
[79] Alexei A. Efros, et al. Test-Time Training with Self-Supervision for Generalization under Distribution Shifts, 2019, ICML.
[80] Percy Liang, et al. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization, 2019, ArXiv.
[81] Joshua Achiam, et al. On First-Order Meta-Learning Algorithms, 2018, ArXiv.
[82] John Duchi, et al. Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach, 2016, Math. Oper. Res.
[83] Sebastian Nowozin, et al. Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes, 2019, NeurIPS.
[84] Mehryar Mohri, et al. Agnostic Federated Learning, 2019, ICML.
[85] Matthias Bethge, et al. Improving Robustness against Common Corruptions by Covariate Shift Adaptation, 2020, NeurIPS.
[86] Gabriela Csurka, et al. Domain Adaptation for Visual Applications: A Comprehensive Survey, 2017, ArXiv.
[87] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2014, ICCV.
[88] C. A. Nelson, et al. Learning to Learn, 2017, Encyclopedia of Machine Learning and Data Mining.
[89] Neil D. Lawrence, et al. Empirical Bayes Transductive Meta-Learning with Synthetic Gradients, 2020, ICLR.
[90] Yann LeCun, et al. The MNIST Database of Handwritten Digits, 2005.
[91] Atsuto Maki, et al. A Systematic Study of the Class Imbalance Problem in Convolutional Neural Networks, 2017, Neural Networks.
[92] Bernt Schiele, et al. Learning to Self-Train for Semi-Supervised Few-Shot Classification, 2019, NeurIPS.
[93] Brian D. Ziebart, et al. Robust Classification under Sample Selection Bias, 2014, NIPS.