Design of Painting Art Style Rendering System Based on Convolutional Neural Network

Convolutional neural network (CNN)-based GAN models for segmenting and rendering painting art mainly suffer from dataset limitations and low rendering efficiency. To address these problems, this paper renders image styles with an improved cycle-consistent generative adversarial network (CycleGAN): the deep residual network (ResNet) in the original generator is replaced with a densely connected convolutional network (DenseNet), and a perceptual loss function is used for adversarial training. The painting art style rendering system built in this paper combines the improved CycleGAN with a perceptual adversarial network (PAN), which removes the model's dependence on paired training samples. The proposed method also improves the quality of the generated painting-style images, increases training stability, and accelerates network convergence. Experiments conducted on the painting art style rendering system built on the proposed model show that the improved CycleGAN + PAN method driven by the perceptual adversarial loss achieves better results: the PSNR of the generated images increases by 6.27% on average, and the SSIM increases by about 10%. The improved CycleGAN + PAN rendering method therefore produces better painting art style images and has strong application value.

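The two modifications described above, swapping the generator's residual blocks for densely connected blocks and training against a VGG-based perceptual loss, can be illustrated with a minimal PyTorch sketch. The class names, growth rate, number of layers, and choice of VGG-19 feature layer below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the two modifications: a DenseNet-style block as a
# drop-in replacement for a CycleGAN residual block, and a VGG-19
# perceptual loss. Hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models


class DenseBlock(nn.Module):
    """Densely connected block: each layer receives the concatenation
    of the block input and all previously produced feature maps."""

    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                nn.InstanceNorm2d(growth_rate),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate
        # 1x1 convolution projects the concatenated features back to the
        # generator's channel width, so the block replaces a ResNet block
        # without changing the surrounding architecture.
        self.fuse = nn.Conv2d(channels, in_channels, kernel_size=1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return self.fuse(torch.cat(features, dim=1))


class PerceptualLoss(nn.Module):
    """Perceptual loss: L1 distance between VGG-19 feature maps of the
    generated image and the reference image (layer relu4_2 assumed)."""

    def __init__(self, layer_index=22):  # index of relu4_2 in torchvision VGG-19
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features
        self.extractor = nn.Sequential(*list(vgg[: layer_index + 1])).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False
        self.criterion = nn.L1Loss()

    def forward(self, generated, reference):
        return self.criterion(self.extractor(generated),
                              self.extractor(reference))


if __name__ == "__main__":
    # Feature-map shape is preserved, so the block is a drop-in replacement.
    block = DenseBlock(in_channels=256)
    feat = torch.randn(1, 256, 64, 64)
    print(block(feat).shape)  # torch.Size([1, 256, 64, 64])

    loss_fn = PerceptualLoss()
    fake = torch.rand(1, 3, 256, 256)
    real = torch.rand(1, 3, 256, 256)
    print(loss_fn(fake, real).item())
```

In such a setup the perceptual term would be added to the usual CycleGAN adversarial and cycle-consistency losses with its own weighting coefficient; the weighting is a design choice not specified here.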