No-training blind image quality assessment

State-of-the-art blind image quality assessment (IQA) methods generally extract perceptual features from training images and feed them into a support vector machine (SVM) to learn a regression model, which is then used to predict the quality scores of test images. However, these methods require complicated training and learning, and their evaluation results are sensitive to image content and learning strategy. In this paper, two novel blind IQA metrics that require no training or learning are proposed. The new methods extract perceptual features, namely the shape consistency of conditional histograms, from the joint histograms of neighboring divisive normalization transform coefficients of distorted images, and then compare the length attribute of the extracted features with that of the reference and degraded images in the LIVE database. In the first method, a cluster center is found in the feature attribute space of the natural reference images, and the distance between the feature attribute of the distorted image and that cluster center is adopted as the quality label. The second method uses the feature attributes and subjective scores of all the images in the LIVE database to construct a dictionary, and the final quality score is calculated by interpolating the subjective scores of nearby words in the dictionary. Unlike traditional SVM-based blind IQA methods, the proposed metrics have explicit expressions that directly reflect the relationship between the perceptual features and image quality. Experimental results on the publicly available LIVE, CSIQ, and TID2008 databases show the effectiveness of the proposed methods, and their performance is fairly acceptable.
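The two scoring schemes described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature vectors, the cluster center, and the dictionary of (feature, subjective score) pairs are assumed inputs here, standing in for the histogram-shape attributes the paper actually extracts. The inverse-distance weighting in the second function is one plausible choice of interpolation; the paper does not specify its exact form in this abstract.

```python
import math

def _euclidean(a, b):
    """Euclidean distance between two feature-attribute vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def quality_by_cluster_distance(feature, cluster_center):
    """Method 1 (sketch): the distance from the distorted image's
    feature attribute to the cluster center of the natural reference
    images is taken directly as the quality label (larger distance
    suggests stronger distortion)."""
    return _euclidean(feature, cluster_center)

def quality_by_dictionary(feature, dictionary, k=3):
    """Method 2 (sketch): find the k nearest 'words' in a dictionary
    of (feature_vector, subjective_score) pairs and interpolate their
    subjective scores, here weighted by inverse distance (an assumed
    interpolation scheme)."""
    neighbors = sorted(dictionary, key=lambda w: _euclidean(feature, w[0]))[:k]
    weights = [1.0 / (_euclidean(feature, f) + 1e-8) for f, _ in neighbors]
    total = sum(weights)
    return sum(w * s for w, (_, s) in zip(weights, neighbors)) / total
```

Both functions are explicit closed-form expressions of the features, which is the key contrast the paper draws with learned SVM regressors: no model fitting is needed, only distances in the feature attribute space.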
