Image Colorization Using Generative Adversarial Networks and Transfer Learning

Automatic colorization is one of the most interesting problems in computer graphics. During colorization, single-channel grayscale images are converted into three-channel color images. Convolutional neural networks (CNNs) are the typical technique and have been studied extensively for automatic colorization. In these networks, information that is generalized over by the top layers remains available in the intermediate layers. Although many applications use only the output of the last CNN layer, in this paper we use the concept of the "hypercolumn", derived from neuroscience, to exploit information at all levels and build a fully automatic image colorization system. Millions of training examples are not always available in the real world for training complex deep learning models. Therefore, the VGG19 model pre-trained on the large ImageNet dataset is used as the backbone of the generator network, and the hypercolumn idea is implemented on top of it using the DIV2K dataset. We train our model to predict the color of each pixel. The results obtained indicate that the proposed method outperforms competing models.
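To illustrate the hypercolumn idea described above, the sketch below (a minimal, assumed PyTorch implementation, not the authors' code) taps activations from several layers of an ImageNet-pretrained VGG19, upsamples them to the input resolution, and concatenates them into a per-pixel feature vector. The tapped layer indices, the frozen backbone, and the 224x224 working resolution are assumptions made for the example.

```python
# Minimal hypercolumn sketch with a pretrained VGG19 backbone (assumptions noted above).
import torch
import torch.nn.functional as F
from torchvision import models

# ImageNet-pretrained VGG19 feature extractor; frozen, i.e. pure transfer learning.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Assumed ReLU layers from which activations are tapped (one per VGG19 stage).
TAP_LAYERS = {3, 8, 17, 26, 35}

def hypercolumns(gray: torch.Tensor) -> torch.Tensor:
    """gray: (N, 1, H, W) grayscale input in [0, 1].
    Returns (N, C, H, W) per-pixel hypercolumn features.
    ImageNet mean/std normalization is omitted here for brevity."""
    x = gray.repeat(1, 3, 1, 1)          # VGG19 expects 3 input channels
    h, w = x.shape[-2:]
    feats = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in TAP_LAYERS:
            # Upsample every tapped feature map back to the input resolution.
            feats.append(F.interpolate(x, size=(h, w), mode='bilinear',
                                       align_corners=False))
    return torch.cat(feats, dim=1)        # concatenate along the channel axis

if __name__ == "__main__":
    L = torch.rand(1, 1, 224, 224)        # dummy grayscale image
    hc = hypercolumns(L)
    print(hc.shape)                        # torch.Size([1, 1472, 224, 224])
```

In a setup like this, the concatenated hypercolumn tensor would feed the colorization head of the generator, which predicts the color channels for each pixel; keeping the pretrained backbone frozen is what makes the approach feasible without millions of training images.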
