No-reference screen content image quality assessment based on multi-region features

Abstract: Unlike natural images captured with cameras, a screen content image (SCI) is a composite image containing both textual and pictorial regions. These differing characteristics cause many difficulties in image quality assessment (IQA). Most existing models based on convolutional neural networks (CNNs) divide large SCIs into image patches to increase the number of training samples. This brings two problems: (1) a single image patch cannot represent the quality of the entire image, especially in IQA of SCIs; (2) patches of an SCI degraded by the same distortion type and strength may have drastically different quality. In addition, these models train the CNN with the mean squared error (MSE) between the predicted quality and the subjective differential mean opinion score (DMOS), without considering the quality ranking between different SCIs. In this paper, we propose a novel no-reference (NR) IQA model based on a CNN. The contributions of our algorithm can be summarized as follows: (1) since different regions of an SCI can differ greatly, pseudo-global features generated from multi-region local features are used for quality evaluation, and they reflect image quality better than the local features of a single image patch; (2) a noise classification task is used as an auxiliary task that aids the quality-score prediction task and improves the representation ability; (3) Siamese networks are used to predict the quality scores of two different SCIs, and a new ranking loss is proposed to rank the predicted scores, enhancing the model's ability to rank images by quality. Experimental results verify that our model outperforms all tested NR IQA methods and full-reference (FR) IQA methods on the Screen Content Image Quality Assessment Database (SIQAD).
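
To make the three ideas concrete, the sketch below (PyTorch) illustrates how multi-region local features could be pooled into a pseudo-global feature, combined with an auxiliary distortion-classification head, and trained jointly with a Siamese pairwise ranking loss. The backbone, layer sizes, region count, margin, and loss weighting are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch of the abstract's three ideas: pseudo-global features
# from multiple regions, an auxiliary distortion-classification head, and a
# Siamese pairwise ranking loss. All layer sizes and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiRegionSCIQA(nn.Module):
    def __init__(self, num_distortions: int = 7, feat_dim: int = 128):
        super().__init__()
        # Small CNN backbone applied to each region (patch) independently.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Quality-score regression head (main task).
        self.score_head = nn.Linear(feat_dim, 1)
        # Distortion-type classification head (auxiliary task).
        self.cls_head = nn.Linear(feat_dim, num_distortions)

    def forward(self, regions: torch.Tensor):
        # regions: (B, R, 3, H, W) -- R regions cropped from one SCI.
        b, r, c, h, w = regions.shape
        local = self.backbone(regions.view(b * r, c, h, w)).view(b, r, -1)
        # Pseudo-global feature: aggregate local features across regions.
        global_feat = local.mean(dim=1)
        return self.score_head(global_feat).squeeze(-1), self.cls_head(global_feat)


def pairwise_ranking_loss(score_a, score_b, dmos_a, dmos_b, margin=0.5):
    # Hinge-style ranking loss on a Siamese pair: the predicted scores
    # should preserve the ordering of the subjective DMOS values.
    sign = torch.sign(dmos_a - dmos_b)
    return F.relu(margin - sign * (score_a - score_b)).mean()


# Minimal usage example with random data.
model = MultiRegionSCIQA()
regions_a = torch.randn(4, 6, 3, 64, 64)        # 4 SCIs, 6 regions each
regions_b = torch.randn(4, 6, 3, 64, 64)
dmos_a, dmos_b = torch.rand(4) * 100, torch.rand(4) * 100
labels_a = torch.randint(0, 7, (4,))            # distortion-type labels

score_a, logits_a = model(regions_a)            # shared (Siamese) weights
score_b, _ = model(regions_b)

loss = (F.mse_loss(score_a, dmos_a)             # regression to DMOS
        + F.cross_entropy(logits_a, labels_a)   # auxiliary classification
        + pairwise_ranking_loss(score_a, score_b, dmos_a, dmos_b))
loss.backward()
```

Mean pooling of the region features is only one plausible way to form the pseudo-global feature; a weighted or attention-based aggregation over regions would fit the same interface.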
