A deep network architecture for image inpainting

Many artworks have been damaged to some extent over time, which greatly degrades their visual quality, so restoring them is valuable and meaningful work. We propose a deep network architecture, the Image Inpainting Conditional Generative Adversarial Network (II-CGAN), to address this problem. Based on a deep convolutional neural network (CNN), we directly learn the mapping between the detail layers of damaged and repaired images from data. Since the intact image corresponding to a real-world damaged image is not available, we synthesize images with missing blocks for training. To minimize the loss of information and ensure better visual quality, a new refined network architecture is introduced. We thoroughly evaluate a generator of increased depth (22 layers) built from units of 3 × 3 and 4 × 4 convolution filters, and a discriminator that uses small (3 × 3) convolution kernels in place of 4 × 4 in all convolution layers. Experimental results show that the proposed method achieves better objective and subjective performance.
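The abstract notes that intact counterparts of real damaged images are unavailable, so training pairs are synthesized by removing blocks from intact images. The sketch below illustrates one plausible way to build such (damaged, intact) pairs; the function name, block size, and number of blocks are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def synthesize_damaged(image, block_size=16, num_blocks=4, rng=None):
    """Create a 'damaged' copy of `image` by zeroing random square blocks.

    Returns the damaged image and a boolean mask (True = intact pixel).
    block_size and num_blocks are hypothetical choices for illustration.
    """
    rng = np.random.default_rng(rng)
    damaged = image.copy()
    mask = np.ones(image.shape[:2], dtype=bool)
    h, w = image.shape[:2]
    for _ in range(num_blocks):
        # Pick a random top-left corner so the block fits inside the image.
        y = int(rng.integers(0, h - block_size + 1))
        x = int(rng.integers(0, w - block_size + 1))
        damaged[y:y + block_size, x:x + block_size] = 0
        mask[y:y + block_size, x:x + block_size] = False
    return damaged, mask

# Example: synthesize a training pair from a 64x64 grayscale "intact" image.
intact = np.random.default_rng(0).random((64, 64))
damaged, mask = synthesize_damaged(intact, block_size=16, num_blocks=2, rng=0)
```

During training, the generator would receive `damaged` as the conditioning input and be asked to reconstruct `intact`; the mask can additionally weight a reconstruction loss toward the missing regions.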
