Recently, Hadoop has become a common programming framework for big data analysis on clusters of commodity machines. To optimize queries over large amounts of data managed by the Hadoop Distributed File System (HDFS), it is particularly important to reduce the amount of data that must be read. Previous work has either designed file formats that cluster data belonging to the same column or proposed placing correlated data on the same physical nodes. When the query workload is known in advance, a further optimization strategy is to place data that is unlikely to be used by the same query into different logical partitions, so that a query needs to read only a subset of the partitions, while physically distributing the data of each partition evenly across the compute nodes. This paper proposes a condition-based partitioning scheme to implement this optimization strategy. Experiments show that the proposed scheme not only reduces I/O cost but also keeps the workload balanced across the compute nodes of the cluster.
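To make the strategy concrete, the following is a minimal Java sketch of condition-based partitioning under stated assumptions: the `Record` type, the class and method names, and the example conditions are all illustrative inventions, not the paper's actual implementation. A record is routed to the logical partition whose workload-derived condition it satisfies, and each partition's records are then spread round-robin over the compute nodes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

/**
 * Hypothetical sketch of condition-based partitioning.
 * Logical partitions are defined by predicates derived from the query
 * workload; physical placement spreads each partition evenly over nodes.
 */
public class ConditionPartitioner {

    /** Simplified record: a single integer column used by the conditions. */
    public static final class Record {
        final int key;
        Record(int key) { this.key = key; }
    }

    private final List<Predicate<Record>> conditions; // one per logical partition
    private final int numNodes;
    private final int[] rrCursor; // per-partition round-robin cursor over nodes

    ConditionPartitioner(List<Predicate<Record>> conditions, int numNodes) {
        this.conditions = conditions;
        this.numNodes = numNodes;
        // one extra slot for the default partition holding non-matching records
        this.rrCursor = new int[conditions.size() + 1];
    }

    /** Logical partition: index of the first matching condition, or the default. */
    int partitionOf(Record r) {
        for (int i = 0; i < conditions.size(); i++) {
            if (conditions.get(i).test(r)) return i;
        }
        return conditions.size(); // default partition
    }

    /** Physical placement: distribute a partition's records evenly over nodes. */
    int nodeOf(int partition) {
        int node = rrCursor[partition] % numNodes;
        rrCursor[partition]++;
        return node;
    }

    public static void main(String[] args) {
        // Two workload-derived conditions, e.g. from range predicates
        // that appear frequently in the observed queries.
        List<Predicate<Record>> conds = new ArrayList<>();
        conds.add(r -> r.key < 100);                  // partition 0: small keys
        conds.add(r -> r.key >= 100 && r.key < 200);  // partition 1: mid-range keys

        ConditionPartitioner cp = new ConditionPartitioner(conds, 3);
        int[] keys = {5, 150, 42, 250, 120, 7};
        for (int k : keys) {
            Record r = new Record(k);
            int p = cp.partitionOf(r);
            System.out.printf("key=%d -> partition %d, node %d%n", k, p, cp.nodeOf(p));
        }
        // A query whose predicate implies "key < 100" now reads only
        // partition 0, whose data is still balanced across all three nodes.
    }
}
```

The round-robin cursor is kept per partition rather than globally so that each logical partition, not just the data set as a whole, ends up evenly distributed; this is what lets a query that touches a single partition still engage every compute node.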