No-Reference Image Sharpness Assessment Based on Rank Learning

To address the label-shortage problem and the fixed-size input constraint of CNN models, we propose a no-reference image sharpness assessment method based on rank learning and effective patch extraction. First, we train a Siamese MobileNet by learning quality ranks among synthetically blurred and unsharpened seed images without any human labels, which provides effective prior knowledge about image sharpness. A single branch is then extracted and fine-tuned on benchmark datasets to predict subjective sharpness ratings. Because the performance of CNN-based IQA metrics is compromised by their fixed-size input constraint, we design a multi-scale, gradient-guided patch extraction method that increases the quality-sensitive input information and boosts performance. Extensive experimental results on six public datasets demonstrate that our approach outperforms other state-of-the-art no-reference image sharpness assessment metrics.
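The two core ideas above, pairwise rank learning and gradient-guided patch selection, can be illustrated with a minimal sketch. This is not the paper's implementation: the margin value, patch size, and the simple absolute-difference gradient proxy are illustrative assumptions, and the real method uses a Siamese MobileNet to produce the scores.

```python
# Hedged sketch of two ideas from the abstract (illustrative, not the paper's code):
# 1) a pairwise margin ranking loss, as used to train a Siamese network on
#    quality ranks between a sharper and a blurrier version of a seed image;
# 2) a gradient-energy patch score for selecting quality-sensitive patches.
# The margin value, patch size, and gradient proxy are assumed, not from the paper.

def margin_ranking_loss(score_sharp, score_blur, margin=1.0):
    """Hinge loss: pushes the sharper image's score above the blurrier one's
    by at least `margin`; zero once the rank is satisfied."""
    return max(0.0, margin - (score_sharp - score_blur))

def gradient_energy(patch):
    """Sum of absolute horizontal and vertical pixel differences,
    a simple stand-in for gradient magnitude."""
    h, w = len(patch), len(patch[0])
    g = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                g += abs(patch[y][x + 1] - patch[y][x])
            if y + 1 < h:
                g += abs(patch[y + 1][x] - patch[y][x])
    return g

def select_top_patches(image, patch=2, k=2):
    """Score non-overlapping patches by gradient energy and return the
    top-k patch positions (row, col) — high-gradient regions carry the
    most sharpness information."""
    h, w = len(image), len(image[0])
    scored = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = [row[x:x + patch] for row in image[y:y + patch]]
            scored.append((gradient_energy(block), (y, x)))
    scored.sort(reverse=True)
    return [pos for _, pos in scored[:k]]
```

In the paper's setting, the loss would be computed on the outputs of the two Siamese branches for a (sharper, blurrier) pair, and the patch selector would run at multiple scales before feeding patches to the network.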
