Video BenchLab: an open platform for realistic benchmarking of streaming media workloads

In this paper, we present Video BenchLab, an open, flexible, and realistic benchmarking platform for measuring the performance of streaming media workloads. While Video BenchLab can be used with any existing media server, we provide a set of tools for researchers to experiment with their own platforms and protocols. The components include a MediaDrop video server, a suite of tools to bulk-insert videos and generate streaming media workloads, a dataset of freely available videos, and a client runtime that replays videos in the native video players of real Web browsers such as Firefox, Chrome, and Internet Explorer. We define simple metrics that capture the quality of video playback and identify issues that can occur during replay. Finally, we provide a Dashboard to manage experiments, collect results, and perform analytics to compare performance across experiments. We present a series of experiments with Video BenchLab that illustrate how these video-specific metrics measure the user-perceived experience when streaming videos to real browsers. We also demonstrate Internet-scale experiments by deploying clients in data centers distributed across the globe. All the software, datasets, workloads, and results used in this paper are freely available on SourceForge for anyone to reuse and extend.

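The abstract does not spell out how the playback-quality metrics are computed, so the sketch below is only a minimal illustration of the kind of metrics it refers to (startup delay, stall count, stall time, stall ratio), assuming the client runtime records player events such as the play request, the first rendered frame, and buffering stalls as timestamps. The PlaybackTrace and playback_metrics names are hypothetical and are not part of the actual Video BenchLab tooling.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PlaybackTrace:
    """Timestamps (in seconds) assumed to be reported by an instrumented HTML5 player."""
    play_requested: float               # moment the script or user pressed play
    first_frame: float                  # moment playback actually started
    stalls: List[Tuple[float, float]]   # (stall_start, stall_end) pairs during playback
    duration: float                     # nominal length of the video

def playback_metrics(trace: PlaybackTrace) -> dict:
    """Derive simple quality-of-playback metrics from a single playback trace."""
    startup_delay = trace.first_frame - trace.play_requested
    stall_count = len(trace.stalls)
    stall_time = sum(end - start for start, end in trace.stalls)
    # Fraction of wall-clock playback time spent stalled rather than playing.
    stall_ratio = stall_time / (trace.duration + stall_time) if trace.duration else 0.0
    return {
        "startup_delay_s": startup_delay,
        "stall_count": stall_count,
        "stall_time_s": stall_time,
        "stall_ratio": stall_ratio,
    }

if __name__ == "__main__":
    # Example trace: playback started after 1.8 s and stalled twice.
    trace = PlaybackTrace(
        play_requested=0.0,
        first_frame=1.8,
        stalls=[(42.0, 44.5), (97.0, 98.2)],
        duration=300.0,
    )
    print(playback_metrics(trace))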