[1] Andrew K. Lampinen, et al. What shapes feature representations? Exploring datasets, architectures, and training, 2020, NeurIPS.
[2] Lior Wolf, et al. The Multiverse Loss for Robust Transfer Learning, 2016, CVPR.
[3] Moinuddin K. Qureshi, et al. Improving Adversarial Robustness of Ensembles with Diversity Training, 2019, arXiv.
[4] Matthieu Cord, et al. RUBi: Reducing Unimodal Biases in Visual Question Answering, 2019, NeurIPS.
[5] Luigi Gresele, et al. Learning explanations that are hard to vary, 2020, arXiv.
[6] Barbara Caputo, et al. Domain Generalization with Domain-Specific Aggregation Modules, 2018, GCPR.
[7] Simon Lucey, et al. Dataless Model Selection With the Deep Frame Potential, 2020, CVPR.
[8] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2017, ICCV.
[9] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[10] Percy Liang, et al. An Investigation of Why Overparameterization Exacerbates Spurious Correlations, 2020, ICML.
[11] Hongjing Lu, et al. Deep convolutional networks do not classify based on global object shape, 2018, PLoS Computational Biology.
[12] Prateek Jain, et al. The Pitfalls of Simplicity Bias in Neural Networks, 2020, NeurIPS.
[13] D. Wolpert. The Supervised Learning No-Free-Lunch Theorems, 2002.
[14] Fred Zhang, et al. SGD on Neural Networks Learns Functions of Increasing Complexity, 2019, NeurIPS.
[15] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[16] Seong Joon Oh, et al. Learning De-biased Representations with Biased Representations, 2019, ICML.
[17] Iryna Gurevych, et al. Towards Debiasing NLU Models from Unknown Biases, 2020, EMNLP.
[18] Aaron C. Courville, et al. Out-of-Distribution Generalization via Risk Extrapolation (REx), 2020, ICML.
[19] Pengtao Xie, et al. On the Generalization Error Bounds of Neural Networks under Diversity-Inducing Mutual Angular Regularization, 2015, arXiv.
[20] James Henderson, et al. Simple but effective techniques to reduce biases, 2019, arXiv.
[21] David Lopez-Paz, et al. In Search of Lost Domain Generalization, 2020, ICLR.
[22] J. Peters, et al. Invariant Causal Prediction for Sequential Data, 2017, Journal of the American Statistical Association.
[23] Fuxin Li, et al. HyperGAN: A Generative Model for Diverse, Performant Neural Networks, 2019, ICML.
[24] Itzik Malkiel, et al. Maximal Multiverse Learning for Promoting Cross-Task Generalization of Fine-Tuned Language Models, 2021, EACL.
[25] Aleksander Madry, et al. Noise or Signal: The Role of Image Backgrounds in Object Recognition, 2020, ICLR.
[26] Yunde Jia, et al. Overcoming Language Priors in VQA via Decomposed Linguistic Representations, 2020, AAAI.
[27] Pavlo Molchanov, et al. SCOPS: Self-Supervised Co-Part Segmentation, 2019, CVPR.
[28] Uri Shalit, et al. On Calibration and Out-of-domain Generalization, 2021, NeurIPS.
[29] Luke Zettlemoyer, et al. Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles, 2020, Findings of EMNLP.
[30] Yoshua Bengio, et al. Towards Causal Representation Learning, 2021, arXiv.
[31] Sameer Singh, et al. Beyond Accuracy: Behavioral Testing of NLP Models with CheckList, 2020, ACL.
[32] Harris Drucker, et al. Improving generalization performance using double backpropagation, 1992, IEEE Transactions on Neural Networks.
[33] M. Bethge, et al. Shortcut learning in deep neural networks, 2020, Nature Machine Intelligence.
[34] Silvio Savarese, et al. Generalizing to Unseen Domains via Adversarial Data Augmentation, 2018, NeurIPS.
[35] Judy Hoffman, et al. Learning to Balance Specificity and Invariance for In and Out of Domain Generalization, 2020, ECCV.
[36] Aapo Hyvärinen, et al. Hidden Markov Nonlinear ICA: Unsupervised Learning from Nonstationary Time Series, 2020, UAI.
[37] Luke Zettlemoyer, et al. Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases, 2019, EMNLP.
[38] Anton van den Hengel, et al. Unshuffling Data for Improved Generalization in Visual Question Answering, 2021, ICCV.
[39] Alexei A. Efros, et al. Unbiased look at dataset bias, 2011, CVPR.
[40] Matthias Bethge, et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness, 2018, ICLR.
[41] Xi Peng, et al. Learning to Learn Single Domain Generalization, 2020, CVPR.
[42] David Lopez-Paz, et al. Invariant Risk Minimization, 2019, arXiv.
[43] E. Bareinboim, et al. On Pearl's Hierarchy and the Foundations of Causal Inference, 2022.
[44] Yonatan Belinkov, et al. Learning from others' mistakes: Avoiding dataset biases without modeling them, 2020, ICLR.
[45] Yongxin Yang, et al. Episodic Training for Domain Generalization, 2019, ICCV.
[46] Jinwoo Shin, et al. Learning from Failure: Training Debiased Classifier from Biased Classifier, 2020, arXiv.
[47] Anton van den Hengel, et al. On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law, 2020, NeurIPS.
[48] Pasquale Minervini, et al. There is Strength in Numbers: Avoiding the Hypothesis-Only Bias in Natural Language Inference via Ensemble Adversarial Training, 2020, EMNLP.
[49] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2018, ICLR.
[50] Aaron C. Courville, et al. Gradient Starvation: A Learning Proclivity in Neural Networks, 2020, NeurIPS.
[51] Yang Yu, et al. Diversity Regularized Machine, 2011, IJCAI.
[52] Murray Shanahan, et al. Learning Diverse Representations for Fast Adaptation to Distribution Shift, 2020, arXiv.
[53] Fabio Maria Carlucci, et al. Domain Generalization by Solving Jigsaw Puzzles, 2019, CVPR.
[54] Suvrit Sra, et al. Diversity Networks: Neural Network Compression Using Determinantal Point Processes, 2015, arXiv:1511.05077.
[55] Kyle Gorman, et al. We Need to Talk about Standard Splits, 2019, ACL.
[56] Tom M. Mitchell. The Need for Biases in Learning Generalizations, 2007.
[57] Ali Farhadi, et al. Situation Recognition: Visual Semantic Role Labeling for Image Understanding, 2016, CVPR.
[58] Jonas Peters, et al. Causal inference by using invariant prediction: identification and confidence intervals, 2015, arXiv:1501.01332.
[59] Andrew Slavin Ross, et al. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations, 2017, IJCAI.
[60] Razvan Pascanu, et al. Adapting Auxiliary Losses Using Gradient Similarity, 2018, arXiv.
[61] D. Ruppert. The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2004.
[62] Mengjie Zhang, et al. Domain Generalization for Object Recognition with Multi-task Autoencoders, 2015, ICCV.
[63] Matthias Bethge, et al. Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding, 2020, ICLR.
[64] Earl T. Barr, et al. Perturbation Validation: A New Heuristic to Validate Machine Learning Models, 2019, arXiv:1905.10201.
[65] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[66] Judy Hoffman, et al. Robust Learning with Jacobian Regularization, 2019, arXiv.
[67] Christina Heinze-Deml, et al. Conditional variance penalties and domain shift robustness, 2017, Machine Learning.
[68] Isabelle Guyon, et al. An Introduction to Variable and Feature Selection, 2003, Journal of Machine Learning Research.
[69] Alexander Binder, et al. Unmasking Clever Hans predictors and assessing what machines really learn, 2019, Nature Communications.
[70] Sahil Singla, et al. Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning, 2021, FAccT.
[71] Anton van den Hengel, et al. Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision, 2020, ECCV.
[72] Iryna Gurevych, et al. Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance, 2020, ACL.
[73] Ryota Tomioka, et al. In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning, 2014, ICLR.
[74] Bernhard Schölkopf, et al. Discovering Causal Signals in Images, 2017, CVPR.
[75] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, arXiv.
[76] Yongxin Yang, et al. Deeper, Broader and Artier Domain Generalization, 2017, ICCV.
[77] Reut Tsarfaty, et al. Evaluating NLP Models via Contrast Sets, 2020, arXiv.
[78] Ilya Sutskever, et al. Learning Transferable Visual Models From Natural Language Supervision, 2021, ICML.
[79] Razvan C. Bunescu, et al. Training Ensembles to Detect Adversarial Examples, 2017, arXiv.
[80] Aapo Hyvärinen, et al. Nonlinear ICA of Temporally Dependent Stationary Sources, 2017, AISTATS.
[81] Tatsuya Harada, et al. Domain Generalization Using a Mixture of Multiple Latent Domains, 2019, AAAI.
[82] Qi Wu, et al. Visual Question Answering: A Tutorial, 2017, IEEE Signal Processing Magazine.
[83] Bernhard Schölkopf, et al. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations, 2018, ICML.
[84] Finale Doshi-Velez, et al. Ensembles of Locally Independent Prediction Models, 2020, AAAI.
[85] Zhenguo Li, et al. DecAug: Out-of-Distribution Generalization via Decomposed Feature Representation and Semantic Augmentation, 2020, AAAI.
[86] Andrew Slavin Ross, et al. Learning Qualitatively Diverse and Interpretable Rules for Classification, 2018, arXiv.
[87] Ning Chen, et al. Improving Adversarial Robustness via Promoting Ensemble Diversity, 2019, ICML.