Subjective and objective study of sharpness enhanced UGC video quality

With the popularity of video sharing applications and video conferencing systems, there has been growing interest in measuring and enhancing the quality of videos captured and transmitted by these applications. While assessing the quality of user-generated content (UGC) videos is itself still an open question, it is even more challenging to enhance the perceptual quality of UGC videos with unknown characteristics. In this work, we study the potential to enhance the quality of UGC videos by applying sharpening. To this end, we construct a subjective dataset through large-scale online crowdsourcing. The dataset consists of 1200 sharpness-enhanced UGC videos processed from 200 UGC source videos. During the subjective test, each processed video is compared with its source to capture fine-grained quality differences. We propose a statistical model to reliably determine whether the quality of each processed video is enhanced or degraded. Moreover, we benchmark state-of-the-art no-reference image and video quality metrics against the collected subjective data. We observe that most metrics do not correlate well with the subjective scores, which indicates the need for more reliable objective metrics for UGC videos.
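To illustrate the two analyses described above, the sketch below is a minimal, hypothetical Python example (not the authors' released code): it decides from paired-comparison votes whether a sharpened clip is perceived as better or worse than its source, and it correlates a no-reference metric with the resulting subjective scores. The vote counts, the even splitting of ties, the binomial test, and all names are illustrative assumptions; the paper's own statistical model for handling ties may differ.

    # Hypothetical sketch of the two analyses summarized in the abstract:
    # (1) classify a processed clip as enhanced / degraded / similar from
    #     paired-comparison votes against its source, and
    # (2) benchmark an objective metric via SROCC/PLCC with subjective scores.
    # All numbers, thresholds, and names are illustrative assumptions.

    import numpy as np
    from scipy import stats

    def enhancement_verdict(prefer_processed, prefer_source, ties, alpha=0.05):
        """Return 'enhanced', 'degraded', or 'similar' for one processed clip.

        Ties are split evenly between the two options here as a simplification;
        a model that treats ties explicitly (e.g. Bradley-Terry with ties)
        could be used instead.
        """
        wins = prefer_processed + 0.5 * ties
        trials = prefer_processed + prefer_source + ties
        # Two-sided binomial test against the 50/50 "no difference" hypothesis.
        p_value = stats.binomtest(int(round(wins)), int(trials), p=0.5).pvalue
        if p_value >= alpha:
            return "similar"
        return "enhanced" if wins > trials / 2 else "degraded"

    def benchmark_metric(metric_scores, subjective_scores):
        """Compute SROCC and PLCC between metric outputs and subjective scores."""
        srocc = stats.spearmanr(metric_scores, subjective_scores)[0]
        plcc = stats.pearsonr(metric_scores, subjective_scores)[0]
        return srocc, plcc

    # Toy usage with made-up data.
    print(enhancement_verdict(prefer_processed=38, prefer_source=12, ties=10))
    rng = np.random.default_rng(0)
    srocc, plcc = benchmark_metric(rng.random(50), rng.random(50))
    print(f"SROCC={srocc:.3f}, PLCC={plcc:.3f}")

In practice, the per-clip verdicts would be aggregated over the 1200 processed videos to estimate how often sharpening helps, and the correlation analysis would be repeated for each no-reference metric under evaluation.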
