Domain Generalization via Adversarially Learned Novel Domains

This study addresses domain generalization, which aims to learn a model that generalizes to unseen domains by exploiting multiple training domains. More specifically, we follow the idea of adversarial data augmentation, which synthesizes “hard” domains and augments the training data with them to improve the model’s domain generalization ability. However, previous studies augmented the training data only with samples similar to the originals, which limits the achievable generalization. To alleviate this issue, we propose a novel adversarial data augmentation method, termed GADA (generative adversarial domain augmentation), which employs an image-to-image translation model to obtain a distribution of novel domains that are semantically different from the training domains and, at the same time, hard to classify. Evaluation and further analysis support our expectation: adversarial data augmentation with semantically different samples leads to better domain generalization performance.
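
To make the idea concrete, below is a minimal PyTorch sketch of an adversarial-domain-augmentation training step in the spirit described above. It is not the authors' implementation: the `Classifier` and `Translator` modules are toy stand-ins, and the pixel-space MSE term is only a crude proxy for "semantically different from the training domains" (GADA itself uses an image-to-image translation model with its own constraints). The sketch merely illustrates the two alternating updates: the translator is pushed toward label-preserving images that are hard to classify, and the classifier is then trained on the source batch augmented with those images.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Classifier(nn.Module):
    """Toy stand-in for the task classifier."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)


class Translator(nn.Module):
    """Toy image-to-image model mapping source images to a 'novel domain'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


def adversarial_augmentation_step(clf, gen, opt_clf, opt_gen, x, y, lam=1.0):
    # (1) Translator step: generate label-preserving images that are HARD
    #     for the classifier (maximize its loss) while moving away from the
    #     source images (crude stand-in for "semantically different domain").
    x_new = gen(x)
    ce_on_new = F.cross_entropy(clf(x_new), y)
    novelty = F.mse_loss(x_new, x)
    gen_loss = -ce_on_new - lam * novelty
    opt_gen.zero_grad()
    gen_loss.backward()
    opt_gen.step()

    # (2) Classifier step: train on the source batch augmented with the
    #     freshly generated "novel domain" images (detached from the
    #     translator's graph).
    x_aug = torch.cat([x, gen(x).detach()], dim=0)
    y_aug = torch.cat([y, y], dim=0)
    clf_loss = F.cross_entropy(clf(x_aug), y_aug)
    opt_clf.zero_grad()
    clf_loss.backward()
    opt_clf.step()
    return clf_loss.item()


if __name__ == "__main__":
    clf, gen = Classifier(), Translator()
    opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-3)
    opt_gen = torch.optim.Adam(gen.parameters(), lr=1e-4)
    x = torch.rand(8, 3, 32, 32)           # dummy source batch
    y = torch.randint(0, 10, (8,))
    print(adversarial_augmentation_step(clf, gen, opt_clf, opt_gen, x, y))
```

Minimizing `gen_loss` maximizes the classifier's loss on the translated images while pushing them away from the source batch; detaching those images in the classifier step keeps the two updates adversarially separate.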
