Reference-guided structure-aware deep sketch colorization for cartoons

Digital cartoon production requires extensive manual labor to colorize sketches with visually pleasing color composition and color shading. During colorization, the artist usually takes an existing cartoon image as color guidance, particularly when colorizing related characters or an animation sequence. Reference-guided colorization is more intuitive than colorization guided by other hints, such as color points, scribbles, or text. Unfortunately, reference-guided colorization is challenging because the style of the colorized image must match that of the reference image in terms of both global color composition and local color shading. In this paper, we propose a novel learning-based framework that colorizes a sketch based on a color style feature extracted from a reference color image. Our framework contains a color style extractor that extracts the color feature from a reference color image, a colorization network that generates multi-scale output images by combining a sketch with the color feature, and a multi-scale discriminator that improves the realism of the output images. Extensive qualitative and quantitative evaluations show that our method outperforms existing methods, providing both superior visual quality and better consistency with the style reference in reference-based colorization.
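To make the three-component pipeline concrete, below is a minimal PyTorch sketch of the framework described in the abstract. All module names, layer configurations, the style dimension, and the number of output scales are assumptions for illustration; the paper's actual architecture, losses, and training details are not reproduced here.

```python
import torch
import torch.nn as nn

# NOTE: layer sizes, style_dim, and the PatchGAN-style discriminators are
# hypothetical choices, not the paper's specification.

class ColorStyleExtractor(nn.Module):
    """Encodes a reference color image into a global color style vector."""
    def __init__(self, style_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> 256-d descriptor
        )
        self.fc = nn.Linear(256, style_dim)

    def forward(self, reference):
        h = self.encoder(reference).flatten(1)
        return self.fc(h)  # (B, style_dim)


class ColorizationNet(nn.Module):
    """Combines a 1-channel sketch with a style vector; emits multi-scale RGB outputs."""
    def __init__(self, style_dim=128):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(128 + style_dim, 128, 3, padding=1)
        self.up1 = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.up2 = nn.Sequential(nn.Upsample(scale_factor=2),
                                 nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.out_half = nn.Conv2d(64, 3, 3, padding=1)  # half-resolution output
        self.out_full = nn.Conv2d(32, 3, 3, padding=1)  # full-resolution output

    def forward(self, sketch, style):
        f = self.down(sketch)
        # Broadcast the style vector spatially and fuse it with sketch features.
        s = style[:, :, None, None].expand(-1, -1, f.size(2), f.size(3))
        f = torch.relu(self.fuse(torch.cat([f, s], dim=1)))
        h1 = self.up1(f)
        h2 = self.up2(h1)
        return torch.tanh(self.out_half(h1)), torch.tanh(self.out_full(h2))


class MultiScaleDiscriminator(nn.Module):
    """One PatchGAN-style discriminator per output scale."""
    def __init__(self, num_scales=2):
        super().__init__()
        def patch_d():
            return nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
                nn.Conv2d(128, 1, 3, padding=1),
            )
        self.discs = nn.ModuleList([patch_d() for _ in range(num_scales)])

    def forward(self, images):
        # `images` is a list of generated (or real) images, one per scale.
        return [d(img) for d, img in zip(self.discs, images)]


if __name__ == "__main__":
    extractor, generator = ColorStyleExtractor(), ColorizationNet()
    discriminator = MultiScaleDiscriminator()
    sketch = torch.randn(1, 1, 256, 256)     # grayscale sketch
    reference = torch.randn(1, 3, 256, 256)  # reference color image
    style = extractor(reference)
    half, full = generator(sketch, style)
    scores = discriminator([half, full])
    print(half.shape, full.shape, [s.shape for s in scores])
```

In this sketch the style vector is tiled and concatenated with the bottleneck features, which is one common way to condition a generator on a global style code; the multi-scale outputs are supervised adversarially by one discriminator per resolution.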
