Multi-Domain Ensembles for Domain Generalization

In this paper, we consider the challenging problem of multi-source zero-shot domain generalization (MDG), where labeled training data from multiple source domains are available, but no data from the target domain can be accessed. Many methods have been proposed for this problem, yet, surprisingly, the naïve solution of pooling all source data together and training a single ERM model remains highly competitive. Since constructing an ensemble of deep classifiers is a popular approach for building models that stay well calibrated under challenging distribution shifts, we propose MulDEns (Multi-Domain Deep Ensembles), a new approach for constructing deep ensembles in multi-domain problems that does not require building domain-specific models. Our empirical studies on multiple standard benchmarks show that MulDEns significantly outperforms ERM and existing ensembling solutions for MDG.
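To make the setup concrete, the following is a minimal, hypothetical sketch of the two ingredients the abstract contrasts: the pooled-data ERM baseline (merging all source domains into one training set) and deep-ensemble prediction by averaging member probabilities. This is an illustrative assumption about the general recipe, not the actual MulDEns algorithm; all names here are invented for the example.

```python
import numpy as np

def pool_sources(domain_datasets):
    """ERM baseline: merge labeled examples from all source domains
    into a single training set, discarding domain identity."""
    xs = np.concatenate([x for x, _ in domain_datasets])
    ys = np.concatenate([y for _, y in domain_datasets])
    return xs, ys

def ensemble_predict(member_probs):
    """Deep-ensemble prediction: average the class-probability
    outputs of the ensemble members, then take the argmax.

    member_probs: array of shape (n_members, n_samples, n_classes).
    """
    avg = np.mean(member_probs, axis=0)   # (n_samples, n_classes)
    return avg.argmax(axis=1)

if __name__ == "__main__":
    # Two toy source domains with 3-dimensional features.
    domains = [(np.zeros((2, 3)), np.array([0, 1])),
               (np.ones((4, 3)), np.array([1, 1, 0, 1]))]
    xs, ys = pool_sources(domains)
    print(xs.shape, ys.shape)  # pooled training set: (6, 3) (6,)

    # Probability outputs of two (untrained, toy) ensemble members.
    probs = np.array([[[0.9, 0.1], [0.2, 0.8]],
                      [[0.6, 0.4], [0.4, 0.6]]])
    print(ensemble_predict(probs))
```

Averaging probabilities rather than hard labels is the standard deep-ensembles choice (Lakshminarayanan et al., 2016), since it preserves the members' confidence information and tends to improve calibration under shift.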
