Massive-scale Big Data analytics is representative of a new class of workloads that justifies rethinking how computing systems should be optimized. This paper addresses the need for a set of benchmarks that system designers can use to measure the quality of their designs and that customers can use to evaluate competing system offerings with respect to commonly performed text-oriented workflows in Hadoop™. Existing benchmarks such as HiBench fall short in both scale and relevance. We describe a methodology for creating a petascale, text-oriented benchmark that includes representative Big Data workflows and can be used to test total system performance, with demands balanced across storage, network, and computation. Creating such a benchmark requires meeting unique challenges associated with the data size and its often unstructured nature. To be useful, the benchmark must also be sufficiently generic to be accepted by the community at large. Here, we focus on a text-oriented Hadoop workflow that consists of three common tasks: categorizing text documents, identifying significant documents within each category, and analyzing significant documents for new topic creation.
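The three-stage workflow above can be illustrated with a small sketch. This is not the paper's implementation: the benchmark runs these stages as Hadoop jobs over petascale text, whereas here each stage is a plain Python function over an invented toy corpus, with hypothetical category keyword sets, keyword-overlap categorization, and a mean-IDF significance score standing in for whatever methods the authors actually use.

```python
from collections import defaultdict
import math

# Toy in-memory corpus and invented category keyword sets (assumptions,
# not from the paper); the real benchmark targets petascale text in HDFS.
DOCS = {
    "d1": "hadoop storage network throughput benchmark results",
    "d2": "nonnegative matrix factorization for collaborative filtering",
    "d3": "text categorization benchmark collection for research",
    "d4": "cluster storage benchmark with balanced network demands",
}
CATEGORIES = {
    "systems": {"hadoop", "storage", "network", "cluster", "throughput"},
    "learning": {"matrix", "factorization", "categorization", "filtering"},
}

def categorize(docs, categories):
    """Stage 1: assign each document to the category whose keyword set
    overlaps its tokens the most (ties broken by category name)."""
    assignment = {}
    for doc_id, text in docs.items():
        tokens = set(text.split())
        assignment[doc_id] = max(
            sorted(categories),
            key=lambda c: len(tokens & categories[c]),
        )
    return assignment

def significance(docs):
    """Stage 2: score each document by the mean inverse document
    frequency of its tokens (rarer vocabulary scores higher)."""
    df = defaultdict(int)
    for text in docs.values():
        for tok in set(text.split()):
            df[tok] += 1
    n = len(docs)
    return {
        doc_id: sum(math.log(n / df[t]) for t in text.split()) / len(text.split())
        for doc_id, text in docs.items()
    }

def new_topics(doc_text, known_terms):
    """Stage 3: propose tokens not covered by any category keyword set
    as candidate new topics."""
    return set(doc_text.split()) - known_terms

assignment = categorize(DOCS, CATEGORIES)
scores = significance(DOCS)
# Keep only the top-scoring document per category as "significant".
best = {}
for doc_id, cat in assignment.items():
    if cat not in best or scores[doc_id] > scores[best[cat]]:
        best[cat] = doc_id
known = set().union(*CATEGORIES.values())
topics = {cat: new_topics(DOCS[doc_id], known) for cat, doc_id in best.items()}
```

In the benchmark setting, each stage maps naturally onto a MapReduce job: stage 1 is embarrassingly parallel over documents, stage 2 needs a global document-frequency aggregation, and stage 3 runs only over the significant subset, which is what lets the workflow exercise storage, network, and computation in different proportions.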
[1] C. K. Baru et al., "Setting the Direction for Big Data Benchmark Standards," TPCTC, 2012.
[2] A. Mojsilovic et al., "A Family of Non-negative Matrix Factorizations for One-Class Collaborative Filtering Problems," 2009.
[3] Y. Chen, "We Don't Know Enough to Make a Big Data Benchmark Suite - An Academia-Industry View," 2012.
[4] H. P. Hofstee et al., "Understanding system design for Big Data workloads," IBM J. Res. Dev., 2013.
[5] Y. Yang et al., "RCV1: A New Benchmark Collection for Text Categorization Research," J. Mach. Learn. Res., 2004.
[6] J. Huang et al., "The HiBench benchmark suite: Characterization of the MapReduce-based data analysis," 2010 IEEE 26th International Conference on Data Engineering Workshops (ICDEW 2010), 2010.