The exponential growth of data requires systems that provide scalable, fault-tolerant infrastructure for storing and processing vast amounts of data efficiently. Hive is a MapReduce-based data warehouse for data aggregation and query analysis. It can organize millions of rows of data into tables, and its data placement structures play a significant role in the performance of the warehouse. Hive also provides an SQL-like language called HiveQL, which compiles queries into MapReduce jobs that run on Hadoop. In this paper, we measure the efficiency of two of Hive's data placement structures, the Record Columnar File (RCFile) and the Optimized Record Columnar File (ORCFile), in terms of data loading, storage, and query processing using the MapReduce framework. The experimental results show the effectiveness of these data placement structures for Hive data warehousing systems.

Index Terms—Big Data; Hive; MapReduce
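For illustration, the placement structure under comparison is chosen per table in HiveQL via the `STORED AS` clause at creation time. A minimal sketch, with hypothetical table and column names (not from the paper):

```sql
-- Hypothetical table stored with the RCFile placement structure
CREATE TABLE sales_rc (
  id BIGINT,
  region STRING,
  amount DOUBLE
)
STORED AS RCFILE;

-- The same schema stored with the ORC placement structure
CREATE TABLE sales_orc (
  id BIGINT,
  region STRING,
  amount DOUBLE
)
STORED AS ORC;

-- Copying rows between the tables rewrites the data in the target format,
-- so identical data can be benchmarked under each placement structure
INSERT INTO TABLE sales_orc SELECT * FROM sales_rc;
```

With identical data materialized in both formats, loading time, on-disk size, and query latency can be measured side by side, which matches the kind of comparison the abstract describes.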