AHFF-Net: Adaptive Hierarchical Feature Fusion Network for Image Inpainting

Generation-based image inpainting methods can capture semantic features, but they fail to produce consistent details and high-quality results because of highly abstract feature learning and the instability of GAN training. Existing methods try to overcome these drawbacks, but they either require additional edge maps or do not generalize to occlusions of different shapes. In this paper, we introduce an Adaptive Hierarchical Feature Fusion Network (AHFF-Net). Without any auxiliary maps, our method produces consistent edges and high-quality results under occlusions of various shapes. Specifically, to guarantee the consistency of low-level features, the hierarchical fusion generator captures and aggregates multi-scale and multi-level contextual features. To obtain high-quality results, the conditional self-supervised discriminator focuses on the unknown region through a conditional GAN loss and stabilizes training through a conditional rotation loss. The proposed network consistently achieves state-of-the-art results on the Paris StreetView and Places365-Standard datasets with three types of masks.
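
The two components described above can be illustrated in simplified form. The PyTorch snippet below is a minimal sketch, under our own assumptions, of (i) fusing multi-scale, multi-level features into a single map and (ii) a rotation-prediction self-supervision loss conditioned on the inpainting mask; the names HierarchicalFusion, rotation_self_supervision_loss, and the rotation_head interface are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only; module/loss names and the discriminator
# interface are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalFusion(nn.Module):
    """Aggregate multi-scale, multi-level feature maps into one map."""
    def __init__(self, channels=(64, 128, 256), out_channels=64):
        super().__init__()
        # 1x1 convolutions project each level to a common channel width.
        self.proj = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in channels
        )
        self.fuse = nn.Conv2d(out_channels * len(channels), out_channels,
                              kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: list of feature maps, ordered from highest to lowest resolution.
        target_size = feats[0].shape[-2:]
        upsampled = [
            F.interpolate(p(f), size=target_size, mode="bilinear",
                          align_corners=False)
            for p, f in zip(self.proj, feats)
        ]
        return self.fuse(torch.cat(upsampled, dim=1))

def rotation_self_supervision_loss(discriminator, completed, mask):
    """Auxiliary 4-way rotation-prediction loss on the completed image,
    conditioned on the mask (a stabilization heuristic)."""
    rotations = [torch.rot90(completed, k, dims=(2, 3)) for k in range(4)]
    rotated_masks = [torch.rot90(mask, k, dims=(2, 3)) for k in range(4)]
    inputs = torch.cat(rotations, dim=0)
    masks = torch.cat(rotated_masks, dim=0)
    labels = torch.arange(4, device=completed.device
                          ).repeat_interleave(completed.size(0))
    # Assumes the discriminator exposes a 4-way rotation classification head.
    logits = discriminator.rotation_head(inputs, masks)
    return F.cross_entropy(logits, labels)
```

One plausible arrangement is to feed the fused features into the decoder of a U-Net-style generator and to add the rotation loss, weighted by a small coefficient, to the adversarial objective of the discriminator.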
