Damage Sensitive and Original Restoration Driven Thanka Mural Inpainting

Thanka murals are an important part of Tibet's cultural heritage, but many precious murals have been damaged over the course of Tibetan history. Existing methods fail to provide a feasible solution for Thanka mural restoration for three reasons: 1) damaged Thanka murals contain multiple large, irregular broken areas; 2) damaged Thanka murals should be repaired with their original content rather than imaginary content; and 3) no large Thanka dataset exists for training. We propose a damage-sensitive and original-restoration-driven (DSORD) Thanka inpainting method to address these problems. The method consists of two parts. In the first part, instead of using existing arbitrary mask sets, we propose a novel mask-generation method that simulates the real damage found in Thanka murals; the masked Thanka and the generated mask are both fed into a partial convolutional neural network for training, which familiarizes the model with a variety of irregular simulated damages. In the second part, we propose a two-phase original-restoration-driven learning method that guides the model to restore the original content of the Thanka mural. Experiments on both simulated and real damage demonstrate that DSORD performs well on a small dataset (N = 3000), generates more realistic content, and better restores damaged Thanka murals.
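The abstract does not specify how the damage-simulating masks are generated. As an illustrative sketch only (the stroke count, step size, and brush radius below are assumptions, not the paper's parameters), irregular damage can be approximated with random-walk brush strokes, producing a binary mask where 0 marks pixels the inpainting network must restore:

```python
import numpy as np

def random_damage_mask(h=256, w=256, n_strokes=6, steps=200, radius=8, seed=None):
    """Simulate an irregular damage mask via random-walk brush strokes.

    Returns a float32 mask: 1.0 = intact pixel, 0.0 = damaged (to inpaint).
    All hyperparameters here are hypothetical, chosen for illustration.
    """
    rng = np.random.default_rng(seed)
    mask = np.ones((h, w), dtype=np.float32)
    for _ in range(n_strokes):
        # Start each stroke at a random location, then wander.
        y, x = int(rng.integers(0, h)), int(rng.integers(0, w))
        for _ in range(steps):
            y = int(np.clip(y + rng.integers(-radius, radius + 1), 0, h - 1))
            x = int(np.clip(x + rng.integers(-radius, radius + 1), 0, w - 1))
            # Carve out a square brush patch around the current position.
            y0, y1 = max(0, y - radius), min(h, y + radius)
            x0, x1 = max(0, x - radius), min(w, x + radius)
            mask[y0:y1, x0:x1] = 0.0
    return mask

def apply_mask(image, mask):
    """Zero out damaged regions; the (masked image, mask) pair forms the
    training input for a partial-convolution inpainting network."""
    return image * mask[..., None]
```

In a partial-convolution setup, the mask is passed alongside the image so that convolutions can be renormalized over valid (intact) pixels only.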
