A linear regression framework for assessing time-varying subjective quality in HTTP streaming

In an HTTP streaming framework, continuous-time quality evaluation is necessary to monitor the time-varying subjective quality (TVSQ) of videos that undergo rate adaptation. In this paper, we present a novel learning framework for TVSQ assessment using linear regression under both the Reduced-Reference (RR) and No-Reference (NR) settings. The proposed framework relies on objective short-time quality estimates and past TVSQs to predict the present TVSQ. Specifically, we rely on spatio-temporal reduced-reference entropic differencing for RR quality estimation and on a 3D convolutional neural network for NR quality estimation. While the proposed RR-TVSQ model delivers performance competitive with state-of-the-art methods, the proposed NR-TVSQ model outperforms state-of-the-art algorithms on the LIVE QoE database.
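To make the prediction structure concrete, the following is a minimal sketch of an autoregressive linear-regression TVSQ predictor of the kind the abstract describes: the present TVSQ is regressed on current and past short-time quality (STQ) estimates together with past TVSQ values. The feature layout, the number of lags, the function names, and the synthetic data are all illustrative assumptions, not the paper's exact model; in practice the STQ inputs would come from ST-RRED (RR) or a 3D-CNN quality estimator (NR), and the targets from continuous subjective scores such as those in the LIVE QoE database.

```python
import numpy as np

def build_features(stq, tvsq, lags=3):
    """Stack current and past STQ estimates with past TVSQ values into a
    regression matrix. `lags` and the feature layout are assumptions made
    for illustration only."""
    X, y = [], []
    for t in range(lags, len(tvsq)):
        X.append(np.concatenate([stq[t - lags:t + 1],   # current + past STQ
                                 tvsq[t - lags:t]]))     # past TVSQ only
        y.append(tvsq[t])
    return np.asarray(X), np.asarray(y)

# Synthetic stand-ins for per-second STQ estimates and continuous
# subjective scores (hypothetical data, for demonstration only).
rng = np.random.default_rng(0)
stq = rng.uniform(0.0, 100.0, size=300)
tvsq = np.convolve(stq, np.ones(5) / 5, mode="same")   # smoothed proxy target

X, y = build_features(stq, tvsq)
X = np.hstack([X, np.ones((len(X), 1))])                # bias term
w, *_ = np.linalg.lstsq(X, y, rcond=None)               # least-squares fit
print("learned regression weights:", np.round(w, 3))
```

At test time, the same feature construction would be applied causally, so that each TVSQ prediction uses only quality estimates and previously predicted TVSQs up to the current instant.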
