A Literature Review of Generative Models for Image-to-Image Translation Problems

In recent years, data-driven (image-based) methodologies such as deep learning and computer vision have made computers highly accurate at identifying features inside images. Research in this area has given rise to a relatively new class of deep learning models, known as generative models, which not only identify features in images but also generate new images. These models, particularly conditional generative adversarial networks (cGANs), conditional variational autoencoders (CVAEs), and generative stochastic networks (GSNs), have become popular because they can translate images from one setting to another while keeping the structure of the generated images aligned with that of the input images. In this paper, we review work that applies these models to web design automation, a task that must be considered during the development phase. We also identify the benefits of implementing each of these models, based on their architectural features, across different problem scenarios. Finally, we discuss some key challenges in solving such image-to-image translation problems.