Image Completion Based on GANs with a New Loss Function

Recently, many deep learning approaches have demonstrated impressive capabilities on a variety of challenging image tasks, such as image classification, object detection, and semantic and instance segmentation. These methods extract deeper features than traditional methods, which is critical for many kinds of image tasks, and they have gradually been applied to image completion of natural images as well. Since most current methods produce blurred or unrealistic results, we propose an image completion method based on a generative adversarial network (GAN). We use an encoder-decoder network to obtain high-level feature information from the image and generate plausible pixel values to fill the missing regions. In addition, we construct a new joint loss function based on the SSIM evaluation metric, which preserves the structural similarity between the completed and the original images as much as possible. Our method keeps the completed regions consistent with the surrounding pixels, making the results look more realistic. We evaluate the proposed method on our datasets and compare it with other methods; our results are sharper and more realistic than previous ones.
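The abstract does not give the exact form of the joint loss, so the following is only a minimal sketch of one plausible formulation: an L1 reconstruction term plus a (1 - SSIM) structural term plus a non-saturating adversarial term on the discriminator's output for the completed image. The SSIM implementation here uses a uniform local window (a Gaussian window is also common), and the weighting coefficients `lambda_rec`, `lambda_ssim`, and `lambda_adv` are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn.functional as F

# Standard SSIM stability constants for images scaled to [0, 1]
C1, C2 = 0.01 ** 2, 0.03 ** 2

def ssim(x, y, window_size=11):
    """Mean SSIM over local windows (uniform averaging window for simplicity)."""
    pad = window_size // 2
    mu_x = F.avg_pool2d(x, window_size, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window_size, stride=1, padding=pad)
    sigma_x = F.avg_pool2d(x * x, window_size, stride=1, padding=pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window_size, stride=1, padding=pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window_size, stride=1, padding=pad) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2)
    )
    return ssim_map.mean()

def generator_joint_loss(completed, target, d_fake_logits,
                         lambda_rec=1.0, lambda_ssim=1.0, lambda_adv=0.001):
    """Hypothetical joint generator loss: L1 + (1 - SSIM) + adversarial term.

    `completed` is the generator output, `target` the ground-truth image,
    and `d_fake_logits` the discriminator logits for the completed image.
    The combination and weights are assumptions for illustration only.
    """
    rec = F.l1_loss(completed, target)                 # pixel-wise reconstruction
    structural = 1.0 - ssim(completed, target)         # SSIM-based similarity term
    adv = F.binary_cross_entropy_with_logits(          # non-saturating GAN loss
        d_fake_logits, torch.ones_like(d_fake_logits))
    return lambda_rec * rec + lambda_ssim * structural + lambda_adv * adv
```

In such a setup the discriminator would be trained with the usual real/fake binary cross-entropy objective, while the generator minimizes the joint loss above; the SSIM term is what encourages the filled region to match the local structure of the surrounding pixels rather than only their average intensity.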
