Let there be color!

We present a novel technique for automatically colorizing grayscale images that combines global priors with local image features. Built on convolutional neural networks (CNNs), our deep network features a fusion layer that elegantly merges local information, computed from small image patches, with global priors computed from the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained end to end. Furthermore, unlike most existing CNN-based approaches, our architecture can process images of any resolution. We train our model on an existing large-scale scene classification database, exploiting its class labels to learn the global priors more efficiently and discriminatively. We validate our approach with a user study and compare against the state of the art, showing significant improvements. Finally, we demonstrate our method on many different types of images, including black-and-white photographs from over a hundred years ago, and obtain realistic colorizations.
