One critical part of building and running a data warehouse is the ETL (Extraction, Transformation, Loading) process; the ETL tool market alone is already a multi-billion-dollar market. Getting data into data warehouses has been a hindering factor for wider potential database applications such as scientific computing, as discussed in recent panels at various database conferences. One particular problem with current load approaches is that, while data are partitioned and replicated across all nodes in data warehouses powered by parallel DBMSs (PDBMSs), load utilities typically reside on a single node, which raises the issues of i) data loss and data availability if the node or its hard drives crash; ii) the file size limit on a single node; and iii) load performance. These issues are mostly handled manually or only partially mitigated by tools. We observe that Hadoop and the Teradata Enterprise Data Warehouse (EDW) have one thing in common: data in both systems are partitioned across multiple nodes for parallel computing, which creates parallel loading opportunities not possible for DBMSs running on a single node. In this paper we describe our approach of using Hadoop as a distributed load strategy for Teradata EDW. We use Hadoop as an intermediate load server to stage the data to be loaded into Teradata EDW, gaining all the benefits of HDFS (Hadoop Distributed File System): i) significantly increased disk space for the file to be loaded; ii) once the data is written to HDFS, the data sources need not retain the data, even before the file is loaded into Teradata EDW; iii) MapReduce programs can be used to transform and add structure to unstructured or semi-structured data; and iv) more importantly, since a file is distributed in HDFS, it can be loaded into Teradata EDW more quickly in parallel, which is the main focus of this paper. When Hadoop and Teradata EDW coexist on the same hardware platform, as customers increasingly require because of reduced hardware and system administration costs, we have a further optimization opportunity: loading HDFS data blocks directly to Teradata parallel units on the same nodes. However, due to the inherently non-uniform data distribution in HDFS, we can rarely avoid transferring some HDFS blocks to remote Teradata nodes. We designed a polynomial-time optimal algorithm and a polynomial-time approximate algorithm that assign HDFS blocks to Teradata parallel units evenly while minimizing network traffic, and we performed experiments on synthetic and real data sets to compare their performance.
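The block-assignment problem mentioned at the end of the abstract can be pictured with a small sketch. The greedy heuristic below is only an illustration of the problem setting, not the paper's optimal or approximate algorithm: the replica locations, node and unit names, and the even per-unit quota are hypothetical. Each HDFS block must go to exactly one Teradata parallel unit, every unit should receive roughly the same number of blocks, and a block placed on a unit whose node already holds a replica of that block avoids a network transfer.

```python
from collections import defaultdict
from math import ceil

def assign_blocks(block_replicas, units_per_node):
    """Greedy sketch: assign each HDFS block to one Teradata parallel unit.

    block_replicas: dict block_id -> set of node names holding a replica.
    units_per_node: dict node name -> list of parallel-unit ids on that node.

    Each unit receives at most ceil(#blocks / #units) blocks (even load);
    a block stays node-local when a co-located unit still has room,
    otherwise it is shipped to the least-loaded remote unit with spare capacity.
    """
    all_units = [(node, u) for node, us in units_per_node.items() for u in us]
    quota = ceil(len(block_replicas) / len(all_units))   # even-distribution cap
    load = defaultdict(int)                              # unit id -> blocks assigned
    assignment, remote_transfers = {}, 0

    for block, replica_nodes in block_replicas.items():
        # Prefer a unit on a node that already stores a replica of this block.
        local = [(n, u) for n, u in all_units if n in replica_nodes and load[u] < quota]
        if local:
            node, unit = local[0]
        else:
            node, unit = min(((n, u) for n, u in all_units if load[u] < quota),
                             key=lambda nu: load[nu[1]])
            remote_transfers += 1
        assignment[block] = unit
        load[unit] += 1
    return assignment, remote_transfers

# Hypothetical toy cluster: 2 nodes, 2 parallel units each, 6 blocks.
blocks = {f"b{i}": {"node1" if i % 2 else "node2"} for i in range(6)}
units = {"node1": ["amp0", "amp1"], "node2": ["amp2", "amp3"]}
plan, remote = assign_blocks(blocks, units)
print(plan, "remote transfers:", remote)
```

An exact solution could instead be obtained by casting the same constraints as a minimum-cost assignment or flow problem; the greedy version above only illustrates the trade-off between even load across parallel units and local block placement.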