The Effect of Color Channel Representations on the Transferability of Convolutional Neural Networks

Image classification is one of the most important tasks in computer vision, underpinning the retrieval, storage, organization, and analysis of digital images. In recent years, deep convolutional neural networks have been used to classify images with great success, surpassing previous state-of-the-art performance. Moreover, through transfer learning, very complex models have been successfully applied to tasks different from the ones for which they were originally trained. Here, the influence of the color representation of the input images was tested when applying a transfer learning technique to three well-known convolutional models. The experimental results showed that representing images in the CIE-L*a*b* color space yielded results reasonably close to those obtained with the RGB format originally used during training. These results support the idea that learned features can be transferred to new models operating on different color channels, such as the CIE-L*a*b* space, and they open up new research questions about the transferability of image representations in convolutional neural networks.
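As a concrete illustration of the input representation the abstract compares against RGB, the sketch below converts an sRGB image to CIE-L*a*b* using only NumPy. This is not the authors' exact pipeline; the sRGB transfer function, conversion matrix, and D65 white point used here are the standard colorimetric definitions, assumed for illustration.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1], shape H x W x 3) to CIE-L*a*b*.

    Standard sRGB -> XYZ -> L*a*b* path with a D65 reference white;
    assumed for illustration, not taken from the paper.
    """
    # 1. Undo the sRGB gamma to obtain linear RGB.
    linear = np.where(rgb > 0.04045,
                      ((rgb + 0.055) / 1.055) ** 2.4,
                      rgb / 12.92)
    # 2. Linear RGB -> CIE XYZ (sRGB primaries, D65 white point).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = linear @ m.T
    # 3. Normalize by the D65 reference white.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # 4. XYZ -> L*a*b* via the cube-root compression (with the linear
    #    segment near zero).
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)
```

An image converted this way can then be fed to a pretrained network in place of the RGB tensor, which is the substitution whose effect the paper measures. Note that L* lies in [0, 100] while a* and b* are roughly in [-128, 127], so a rescaling step is typically needed before reusing weights trained on RGB inputs.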
