Learning to create multi-stylized Chinese character fonts by generative adversarial networks

Owing to the complex structure and the sheer number of Chinese characters, designing a new Chinese font is challenging and time-consuming for artists. Consequently, Chinese character generation and font style transfer have become active research topics. At present, most models for Chinese character transformation cannot generate multiple fonts, and they perform poorly at synthesizing convincing glyphs. In this paper, we propose a novel method for transforming and generating Chinese character fonts based on Generative Adversarial Networks. Through a font-style-specifying mechanism, our model can generate multiple fonts at once, and by combining the characteristics of existing fonts it can also generate an entirely new font.
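The style-specifying and font-combining ideas above can be sketched abstractly as conditioning a generator on a style code and mixing codes to produce a new style. The snippet below is an illustrative toy in NumPy, not the paper's actual architecture: the network, dimensions, and style names are hypothetical stand-ins for a trained GAN generator.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_STYLES = 3    # hypothetical existing fonts, e.g. regular / kai / song
CONTENT_DIM = 16  # toy character-content embedding size
OUT_DIM = 16      # toy "glyph" output size

# Toy generator weights (stand-in for a trained generator network)
W = rng.normal(size=(CONTENT_DIM + NUM_STYLES, OUT_DIM))

def generate(content, style_code):
    """Condition the generator on a style code by concatenation."""
    x = np.concatenate([content, style_code])
    return np.tanh(x @ W)

content = rng.normal(size=CONTENT_DIM)

# One-hot style codes select an existing font style.
style_a = np.eye(NUM_STYLES)[0]
style_b = np.eye(NUM_STYLES)[1]

glyph_a = generate(content, style_a)
glyph_b = generate(content, style_b)

# A convex combination of style codes sketches the idea of creating a
# new font from the characteristics of existing fonts.
glyph_mixed = generate(content, 0.5 * style_a + 0.5 * style_b)
```

In a real model, `generate` would be a deep convolutional generator and the style code would be learned jointly with an adversarial discriminator; the point here is only that a single generator can serve multiple target fonts, and that interpolated codes yield styles outside the training set.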
