GAN-based AI Drawing Board for Image Generation and Colorization

We propose a GAN (Generative Adversarial Network)-based drawing board that takes semantic input (via segmentation maps) and color-tone input (via strokes) from users and automatically generates paintings. Our approach is built on a novel, lightweight feature embedding that incorporates colorization effects into the painting generation process. Unlike existing GAN-based image generation models that take only semantic input, our drawing board can edit local colors after generation. Our method samples color information from the user's strokes as extra input and feeds it into a GAN model for conditional generation, enabling the creation of pictures or paintings with semantic and color control in real time.
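The paper does not specify how the semantic and stroke inputs are combined, but one plausible sketch of the conditioning step is to stack a one-hot semantic map, the sampled stroke colors, and a stroke mask into a single tensor that a conditional generator would consume. The function name and channel layout below are hypothetical, not the authors' implementation:

```python
import numpy as np

def build_condition(seg, strokes, num_classes):
    """Assemble a hypothetical conditional input for the generator.

    seg: (H, W) integer array of semantic class labels.
    strokes: (H, W, 3) float array of user stroke colors in [0, 1],
             with NaN wherever the user drew no stroke.
    Returns an (H, W, num_classes + 4) array: one-hot semantics,
    sampled RGB color hints (zero where unstroked), and a binary
    mask channel marking stroked pixels.
    """
    onehot = np.eye(num_classes, dtype=np.float32)[seg]          # (H, W, C)
    mask = ~np.isnan(strokes).any(axis=-1)                       # (H, W) bool
    colors = np.where(mask[..., None], strokes, 0.0)             # zero fill
    return np.concatenate(
        [onehot,
         colors.astype(np.float32),
         mask[..., None].astype(np.float32)],
        axis=-1,
    )
```

Because the stroke channels are separate from the semantic channels, a user can repaint a local region (changing only the color hints and mask) and re-run the generator without touching the segmentation, which matches the local color-editing behavior the abstract describes.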
