GCM-Net: Towards Effective Global Context Modeling for Image Inpainting

Deep learning based inpainting methods have achieved promising performance for image restoration; however, current methods still tend to produce implausible structures and blurry textures when processing heavily corrupted images. In this paper, we propose a new image inpainting method termed Global Context Modeling Network (GCM-Net). By capturing global contextual information, GCM-Net can better recover the missing regions of images damaged by irregular masks. Specifically, we first use four convolution layers to extract shallow features. Then, we design a progressive multi-scale fusion block (PMSFB) to extract and fuse multi-scale features and obtain local features. In addition, a dense context extraction (DCE) module is designed to aggregate the local features extracted by the PMSFBs. To improve the information flow, a channel attention guided residual learning module is deployed in both the DCE module and the PMSFBs, reweighting the learned residual features and refining the extracted information. To capture more global contextual information and enhance the representation ability, a coordinate context attention (CCA) based module is also presented. Finally, the extracted features, which carry rich contextual information, are decoded into the inpainting result. Extensive experiments on the Paris Street View, Places2 and CelebA-HQ datasets demonstrate that our method recovers structures and textures better and delivers significant improvements over related inpainting methods.
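To make the described pipeline concrete, the following is a minimal PyTorch sketch of how such an architecture could be assembled: four shallow convolution layers, a stack of PMSFBs densely aggregated by a DCE module with channel-attention-guided residuals, a coordinate-context-attention stage, and a decoder. The concrete realisations of PMSFB, DCE and CCA, as well as all kernel sizes, channel widths, dilation rates and block counts, are assumptions made for illustration; they are not taken from the paper.

```python
# Hypothetical sketch of the GCM-Net pipeline summarised in the abstract.
# All structural details below are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Channel attention used to reweight residual features."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)


class PMSFB(nn.Module):
    """Progressive multi-scale fusion block (assumed form): parallel dilated
    branches are fused and added back via a channel-attention-guided residual."""
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in (1, 2, 4))
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.ca = ChannelAttention(channels)

    def forward(self, x):
        feats = [F.relu(b(x)) for b in self.branches]
        return x + self.ca(self.fuse(torch.cat(feats, dim=1)))


class DCE(nn.Module):
    """Dense context extraction: aggregates the outputs of several PMSFBs."""
    def __init__(self, channels, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(PMSFB(channels) for _ in range(num_blocks))
        self.aggregate = nn.Conv2d(num_blocks * channels, channels, 1)
        self.ca = ChannelAttention(channels)

    def forward(self, x):
        outs, cur = [], x
        for blk in self.blocks:
            cur = blk(cur)
            outs.append(cur)
        return x + self.ca(self.aggregate(torch.cat(outs, dim=1)))


class CoordinateContextAttention(nn.Module):
    """Coordinate-attention-style gating that pools along H and W to capture
    global context in both spatial directions."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        _, _, h, w = x.shape
        pooled_h = x.mean(dim=3, keepdim=True)                        # N x C x H x 1
        pooled_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)    # N x C x W x 1
        y = F.relu(self.conv1(torch.cat([pooled_h, pooled_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        attn_h = torch.sigmoid(self.conv_h(y_h))                      # N x C x H x 1
        attn_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # N x C x 1 x W
        return x * attn_h * attn_w


class GCMNetSketch(nn.Module):
    """Shallow features -> DCE over PMSFBs -> CCA -> decoder."""
    def __init__(self, channels=64):
        super().__init__()
        layers = [nn.Conv2d(4, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(3):  # four shallow conv layers in total
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        self.shallow = nn.Sequential(*layers)
        self.dce = DCE(channels)
        self.cca = CoordinateContextAttention(channels)
        self.decoder = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, image, mask):
        x = torch.cat([image * mask, mask], dim=1)  # masked RGB image + binary mask
        return torch.tanh(self.decoder(self.cca(self.dce(self.shallow(x)))))
```

A forward pass takes the corrupted RGB image and a binary mask (1 for known pixels) and returns an image-sized tensor; the sketch deliberately omits the training losses, the full decoder design and any skip connections, which the abstract does not specify.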
