Quantification of YouTube QoE via Crowdsourcing

This paper addresses the challenge of assessing and modeling Quality of Experience (QoE) for online video services based on TCP streaming. We present a dedicated QoE model for YouTube that takes into account the key influence factors that shape quality perception of this service, in particular stalling events caused by network bottlenecks. As a second contribution, we propose a generic subjective QoE assessment methodology for multimedia applications such as online video that is based on crowdsourcing, a highly cost-efficient, fast, and flexible way of conducting user experiments. We demonstrate how our approach leverages the inherent strengths of crowdsourcing while addressing critical aspects such as the reliability of the resulting experimental data. Our results suggest that crowdsourcing is a highly effective QoE assessment method not only for online video, but also for a wide range of other current and future Internet applications.
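A QoE model of this kind maps observable stalling parameters (number and duration of stalling events) to a Mean Opinion Score (MOS) on the usual 1-to-5 scale. The sketch below is purely illustrative and assumes an exponential relationship between stalling and perceived quality; the function name mos_from_stalling and the coefficients A, B, C, D are hypothetical placeholders, not the parameters fitted in the paper from the crowdsourcing data.

```python
import math

# Hypothetical coefficients chosen only for illustration; the actual model
# parameters must be fitted to subjective ratings from the experiments.
A, B, C, D = 3.5, 0.15, 0.19, 1.5

def mos_from_stalling(num_stalls, stall_len_s, a=A, b=B, c=C, d=D):
    """Map a stalling pattern to an illustrative MOS on a 1..5 scale.

    Assumed exponential form: more and longer stalling events decay the
    score from roughly (a + d) for undisturbed playback toward d.
    """
    return a * math.exp(-(b * stall_len_s + c) * num_stalls) + d

if __name__ == "__main__":
    # Example: two stalling events of 4 seconds each during playback.
    print(round(mos_from_stalling(num_stalls=2, stall_len_s=4.0), 2))
```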
