DAGAN: A Domain-Aware Method for Image-to-Image Translations
Image-to-image translation aims to learn inter-domain mappings from paired or unpaired data. Although this technique has been widely used for visual prediction tasks, such as classification and image segmentation, and has achieved strong results, existing methods still fail to perform flexible translations when learning different mappings, especially for images containing multiple instances. To tackle this problem, we propose DAGAN (Domain-Aware Generative Adversarial Network), a generative framework that enables domains to learn diverse mapping relationships. We assume that an image is composed of a background domain and an instance domain, and feed each domain into a separate translation network. Finally, we integrate the translated domains back into a complete image, using smoothed labels to maintain realism. We evaluated this instance-aware framework on datasets generated by YOLO and confirmed that it is capable of generating images of equal or better diversity compared to current translation models.
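The decompose-translate-recombine idea described above can be illustrated with a minimal PyTorch sketch. This is only an assumption-laden illustration, not the paper's implementation: the names TinyTranslator, smooth_mask, and translate are hypothetical, the generators are placeholders, and the actual label-smoothing scheme may differ; in practice the instance masks would come from a detector such as YOLO.

```python
# Hypothetical sketch of domain-aware translation: split an image into
# instance and background domains, translate each with its own generator,
# and recombine with a smoothed mask to avoid visible seams.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTranslator(nn.Module):
    """Placeholder for a per-domain translation generator."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def smooth_mask(mask: torch.Tensor, kernel_size: int = 11) -> torch.Tensor:
    """Soften a hard 0/1 instance mask (one form of label smoothing)."""
    return F.avg_pool2d(mask, kernel_size, stride=1, padding=kernel_size // 2)

def translate(image: torch.Tensor, instance_mask: torch.Tensor,
              g_instance: nn.Module, g_background: nn.Module) -> torch.Tensor:
    """Translate instance and background domains separately, then blend."""
    soft = smooth_mask(instance_mask)                      # smoothed label at the boundary
    fake_inst = g_instance(image * instance_mask)          # instance-domain translation
    fake_bg = g_background(image * (1 - instance_mask))    # background-domain translation
    return soft * fake_inst + (1 - soft) * fake_bg

# Usage: the mask would normally come from an off-the-shelf detector (e.g. YOLO).
image = torch.rand(1, 3, 256, 256) * 2 - 1                 # image scaled to [-1, 1]
mask = (torch.rand(1, 1, 256, 256) > 0.5).float()          # placeholder instance mask
out = translate(image, mask, TinyTranslator(), TinyTranslator())
print(out.shape)  # torch.Size([1, 3, 256, 256])
```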