Image Completion with Discriminator Guided Context Encoder

Image completion or inpainting is a technique used to reconstruct damaged or distorted regions in an image. In this paper, a new convolutional neural network model is presented for image completion. The proposed method is based on an auto-encoder and a Generative Adversarial Network (GAN) structure. To succeed at this task and produce a plausible output for the damaged or distorted region(s), the auto-encoder part of the network needs to understand the content of the entire image. The discriminators used in the proposed network, in turn, are responsible for deciding whether the inpainted output has the expected quality. The global discriminator looks at the entire image to evaluate whether it is consistent as a whole, while the local discriminator looks only at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained so that the discriminator networks judge the inpainted image to be as real as the original. The approach also aims to reconstruct images regardless of where the damaged regions are located. We compared the proposed method with two other approaches, and the results show that it performs better, especially on low-texture images.
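
The interplay between the completion auto-encoder and the two discriminators can be summarized in the following sketch. It is a minimal illustration assuming a PyTorch implementation; the layer widths, the 128x128 image and 64x64 hole sizes, and the loss weight are illustrative assumptions rather than the configuration used in the paper.

    # Minimal sketch of a discriminator-guided context encoder (assumed PyTorch).
    # Layer widths, image/hole sizes, and loss weights are illustrative only.
    import torch
    import torch.nn as nn

    class CompletionNet(nn.Module):
        """Auto-encoder that fills the masked region from the whole-image context."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.ReLU(),   # input: RGB + mask channel
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, image, mask):
            x = torch.cat([image * (1 - mask), mask], dim=1)  # hide the damaged region
            return self.decoder(self.encoder(x))

    def make_discriminator(in_size):
        """Convolutional real/fake classifier; instantiated once for the global
        (full image) view and once for the local (completed region) view."""
        return nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(256 * (in_size // 8) ** 2, 1),
        )

    # Global discriminator sees the whole 128x128 image; the local one sees the 64x64 hole.
    G = CompletionNet()
    D_global = make_discriminator(128)
    D_local = make_discriminator(64)

    bce = nn.BCEWithLogitsLoss()
    l2 = nn.MSELoss()

    def generator_loss(image, mask, hole_box, adv_weight=0.001):
        """Reconstruction loss on the whole image plus adversarial terms that push
        both discriminators to judge the completed image as real."""
        out = G(image, mask)
        completed = image * (1 - mask) + out * mask
        y0, x0, s = hole_box                                  # top-left corner and size of the hole
        patch = completed[:, :, y0:y0 + s, x0:x0 + s]
        real = torch.ones(image.size(0), 1)
        adv = bce(D_global(completed), real) + bce(D_local(patch), real)
        return l2(out, image) + adv_weight * adv

During training, the two discriminators would be updated alternately with the usual real/fake objective on original versus completed images, while the completion network minimizes the combined loss above.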
