StarFont: Enabling Font Completion Based on Few-Shot Examples

Font design has become an essential part of multimedia: a font can convey the mood and intention of its designer. However, creating a new font for Chinese is laborious because the language contains over 20 thousand characters with complex morphological structures, and current font completion methods suffer from several drawbacks. To address this problem, we propose StarFont, a font completion system that automatically completes a whole font using few-shot learning. Our model takes several example characters of a new font, learns the design style, and applies it to the remaining characters to complete the font. Unlike existing font generation models, we treat each character, rather than each font, as a class, and we abandon the reconstruction loss because ground-truth images for characters are easy to obtain. Moreover, we combine multiple input images to generate each new image, whereas existing methods use a one-to-one approach. Compared to other deep learning-based font completion methods, our model requires fewer examples of the new font and generates better results. Both qualitative and quantitative evaluations show that our method outperforms existing approaches.
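The core idea of the abstract, namely extracting a shared style code from several example glyphs and conditioning generation on a per-character class, can be illustrated with a minimal sketch. This is not the paper's architecture: the linear "encoder" and "generator" weights, the dimensions, and all names (`encode_style`, `generate`, `STYLE_DIM`) are illustrative stand-ins for the trained networks a real system would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64x64 glyph bitmaps, 128-d style code.
IMG = 64 * 64
STYLE_DIM = 128
NUM_CHARS = 20000  # rough size of the Chinese character set

# Toy linear weights stand in for trained convolutional networks.
W_enc = rng.normal(0.0, 0.01, (IMG, STYLE_DIM))
char_table = rng.normal(0.0, 0.01, (NUM_CHARS, STYLE_DIM))  # one class embedding per character
W_gen = rng.normal(0.0, 0.01, (2 * STYLE_DIM, IMG))

def encode_style(refs: np.ndarray) -> np.ndarray:
    """Pool the style codes of k reference glyphs into one code (few-shot input)."""
    return (refs.reshape(len(refs), IMG) @ W_enc).mean(axis=0)

def generate(style: np.ndarray, char_id: int) -> np.ndarray:
    """Condition generation on the shared style code and a character class."""
    z = np.concatenate([style, char_table[char_id]])
    return np.tanh(z @ W_gen).reshape(64, 64)

# Complete a missing character from 5 example glyphs of the new font.
refs = rng.random((5, 64, 64))
glyph = generate(encode_style(refs), char_id=123)
```

The averaging step is where the many-to-one combination of input images happens; the `char_table` lookup is what treating each character as its own class amounts to at inference time.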