Harnessing Out-Of-Distribution Examples via Augmenting Content and Style
Mingming Gong, Tongliang Liu, Bo Han, Xiaobo Xia, Zhuo Huang, Li Shen, Chen Gong
[1] Xiankai Lu, et al. SAFER-STUDENT for Safe Deep Semi-Supervised Learning With Unseen-Class Unlabeled Data, 2024, IEEE Transactions on Knowledge and Data Engineering.
[2] Tongliang Liu, et al. Robust Generalization Against Photon-Limited Corruptions via Worst-Case Sharpness Minimization, 2023, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Tongliang Liu, et al. Out-of-distribution Detection with Implicit Outlier Transformation, 2023, ICLR.
[4] Tongliang Liu, et al. Watermarking for Out-of-distribution Detection, 2022, NeurIPS.
[5] Tongliang Liu, et al. Improving Adversarial Robustness via Mutual Information Estimation, 2022, ICML.
[6] Alexander J. Smola, et al. Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition, 2022, ICML.
[7] Yilong Yin, et al. Not All Parameters Should Be Treated Equally: Deep Safe Semi-supervised Learning under Class Distribution Mismatch, 2022, AAAI.
[8] Lei Feng, et al. Open-Sampling: Exploring Out-of-Distribution Data for Re-balancing Long-tailed Datasets, 2022, ICML.
[9] Mingming Gong, et al. MissDAG: Causal Discovery in the Presence of Missing Data with Continuous Additive Noise Models, 2022, NeurIPS.
[10] Yixuan Li, et al. Mitigating Neural Network Overconfidence with Logit Normalization, 2022, ICML.
[11] Tongliang Liu, et al. Modeling Adversarial Noise for Adversarial Training, 2021, ICML.
[12] Mingming Gong, et al. Instance-dependent Label-noise Learning under a Structural Causal Model, 2021, NeurIPS.
[13] Liang Lin, et al. Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning, 2021, IEEE/CVF International Conference on Computer Vision (ICCV).
[14] Jaegul Choo, et al. Learning Debiased Representation via Disentangled Feature Augmentation, 2021, NeurIPS.
[15] Masashi Sugiyama, et al. Probabilistic Margins for Instance Reweighting in Adversarial Training, 2021, NeurIPS.
[16] B. Schölkopf, et al. CausalAdv: Adversarial Robustness through the Lens of Causality, 2021, arXiv:2106.06196.
[17] Xinbo Gao, et al. Towards Defending against Adversarial Examples via Attack-Invariant Features, 2021, ICML.
[18] Luigi Gresele, et al. Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style, 2021, NeurIPS.
[19] Yi Yang, et al. Domain Consensus Clustering for Universal Domain Adaptation, 2021, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Mingming Gong, et al. Instance Correction for Learning with Open-set Noisy Labels, 2021, arXiv.
[21] Shu Kong, et al. OpenGAN: Open-Set Recognition via Open Data Generation, 2021, IEEE/CVF International Conference on Computer Vision (ICCV).
[22] Hongxia Jin, et al. Negative Data Augmentation, 2021, ICLR.
[23] Jure Leskovec, et al. Open-World Semi-Supervised Learning, 2021, ICLR.
[24] Jian Yang, et al. They are Not Completely Useless: Towards Recycling Transferable Unlabeled Data for Class-Mismatched Semi-Supervised Learning, 2020, IEEE Transactions on Multimedia.
[25] Tie-Yan Liu, et al. Learning Causal Semantic Representation for Out-of-Distribution Prediction, 2020, NeurIPS.
[26] Charles Blundell, et al. Representation Learning via Invariant Causal Mechanisms, 2020, ICLR.
[27] Yingwei Li, et al. Shape-Texture Debiased Neural Network Training, 2020, ICLR.
[28] Yixuan Li, et al. Energy-based Out-of-distribution Detection, 2020, NeurIPS.
[30] Go Irie, et al. Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning, 2020, ECCV.
[31] Zhi-Hua Zhou, et al. Safe Deep Semi-Supervised Learning for Unseen-Class Unlabeled Data, 2020, ICML.
[32] Jasper Snoek, et al. Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks, 2020, arXiv.
[33] Shaogang Gong, et al. Semi-Supervised Learning under Class Distribution Mismatch, 2020, AAAI.
[34] Kate Saenko, et al. Universal Domain Adaptation through Self Supervision, 2020, NeurIPS.
[35] David Berthelot, et al. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence, 2020, NeurIPS.
[36] Simon Kornblith, et al. The Origins and Prevalence of Texture Bias in Convolutional Neural Networks, 2019, NeurIPS.
[37] Quoc V. Le, et al. RandAugment: Practical Automated Data Augmentation with a Reduced Search Space, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[38] David Lopez-Paz, et al. Invariant Risk Minimization, 2019, arXiv.
[39] Jimeng Sun, et al. Causal Regularization, 2019, NeurIPS.
[40] Jasper Snoek, et al. Likelihood Ratios for Out-of-Distribution Detection, 2019, NeurIPS.
[41] Hong Liu, et al. Separate to Adapt: Open Set Domain Adaptation via Progressive Separation, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[42] Michael I. Jordan, et al. Universal Domain Adaptation, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[43] David Berthelot, et al. MixMatch: A Holistic Approach to Semi-Supervised Learning, 2019, NeurIPS.
[44] Quoc V. Le, et al. Unsupervised Data Augmentation for Consistency Training, 2019, NeurIPS.
[46] Jianmin Wang, et al. Learning to Transfer Examples for Partial Domain Adaptation, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[47] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2019, ICLR.
[48] Matthias Hein, et al. Why ReLU Networks Yield High-Confidence Predictions Far Away From the Training Data and How to Mitigate the Problem, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[49] Stefan Bauer, et al. Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness, 2018, ICML.
[50] Weng-Keen Wong, et al. Open Set Learning with Counterfactual Images, 2018, ECCV.
[51] Kibok Lee, et al. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks, 2018, NeurIPS.
[52] Kate Saenko, et al. VisDA: A Synthetic-to-Real Benchmark for Visual Domain Adaptation, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[53] Alex ChiChung Kot, et al. Domain Generalization with Adversarial Feature Learning, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[54] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[55] Quoc V. Le, et al. AutoAugment: Learning Augmentation Policies from Data, 2018, arXiv.
[56] Silvio Savarese, et al. Generalizing to Unseen Domains via Adversarial Data Augmentation, 2018, NeurIPS.
[57] Tatsuya Harada, et al. Open Set Domain Adaptation by Backpropagation, 2018, ECCV.
[58] Sunita Sarawagi, et al. Generalizing Across Domains via Cross-Gradient Training, 2018, ICLR.
[59] Bo Li, et al. Causally Regularized Learning with Agnostic Data Selection Bias, 2017, ACM Multimedia.
[60] Rahil Garnavi, et al. Generative OpenMax for Multi-Class Open Set Classification, 2017, BMVC.
[61] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[62] R. Srikant, et al. Enhancing the Reliability of Out-of-distribution Image Detection in Neural Networks, 2017, ICLR.
[63] Shin Ishii, et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[64] Trevor Darrell, et al. Adversarial Discriminative Domain Adaptation, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[65] Kevin Gimpel, et al. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks, 2016, ICLR.
[66] Bernhard Schölkopf, et al. Domain Adaptation with Conditional Transferable Components, 2016, ICML.
[67] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[68] J. Pearl, et al. Causal Inference in Statistics: A Primer, 2016.
[69] David M. Blei, et al. Variational Inference: A Review for Statisticians, 2016, arXiv.
[70] Yinda Zhang, et al. LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop, 2015, arXiv.
[71] François Laviolette, et al. Domain-Adversarial Training of Neural Networks, 2015, Journal of Machine Learning Research.
[72] Victor S. Lempitsky, et al. Unsupervised Domain Adaptation by Backpropagation, 2014, ICML.
[73] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[74] Iasonas Kokkinos, et al. Describing Textures in the Wild, 2013, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[75] Anderson Rocha, et al. Toward Open Set Recognition, 2013, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[76] Bernhard Schölkopf, et al. Domain Adaptation under Target and Conditional Shift, 2013, ICML.
[77] Bernhard Schölkopf, et al. Robust Learning via Cause-Effect Models, 2011, arXiv.
[78] Razvan Pascanu, et al. Deep Learners Benefit More from Out-of-Distribution Examples, 2011, AISTATS.
[79] Qiang Yang, et al. A Survey on Transfer Learning, 2010, IEEE Transactions on Knowledge and Data Engineering.
[80] Trevor Darrell, et al. Adapting Visual Category Models to New Domains, 2010, ECCV.
[81] G. Griffin, et al. Caltech-256 Object Category Dataset, 2007.
[82] Andrew Zisserman, et al. A Visual Vocabulary for Flower Classification, 2006, IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06).
[83] Mingming Gong, et al. Mosaic Representation Learning for Self-supervised Visual Pre-training, 2023, ICLR.
[84] B. Wessler, et al. Fix-A-Step: Effective Semi-supervised Learning from Uncurated Unlabeled Sets, 2022, arXiv.
[85] Yuhuai Wu, et al. Invariant Causal Representation Learning for Out-of-Distribution Generalization, 2022, ICLR.
[86] Tongliang Liu, et al. Pluralistic Image Completion with Gaussian Mixture Models, 2022, NeurIPS.
[87] Jian Yang, et al. Universal Semi-Supervised Learning, 2021, NeurIPS.
[88] Jinwoo Shin, et al. Learning from Failure: De-biasing Classifier from Biased Classifier, 2020, NeurIPS.
[89] D. P. Kingma, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[90] Colin Raffel, et al. Realistic Evaluation of Semi-Supervised Learning Algorithms, 2018, NeurIPS.
[91] I. Goodfellow, et al. Generative Adversarial Nets, 2014, NeurIPS.
[92] Guigang Zhang, et al. Deep Learning, 2016, International Journal of Semantic Computing.
[94] Dong-Hyun Lee, et al. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks, 2013.
[95] Fei-Fei Li, et al. Novel Dataset for Fine-Grained Image Categorization: Stanford Dogs, 2012.
[96] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[97] Léon Bottou, et al. Large-Scale Machine Learning with Stochastic Gradient Descent, 2010, COMPSTAT.
[98] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[100] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008.
[101] Jakub M. Tomczak, et al. Selecting Data Augmentation for Simulating Interventions, 2021, ICML.