Cloud computing has grown increasingly popular over the past decade and continues to evolve with advances in architecture, software, and networking. Hadoop-MapReduce is a widely used software framework for processing parallelizable problems over large datasets on a distributed cluster of processors or stand-alone computers. Cloud-based Hadoop-MapReduce can scale incrementally in the number of processing nodes; hence, Hadoop-MapReduce is designed to provide a processing platform with powerful computation. Network traffic is one of the most important bottlenecks in data-intensive computing, and network latency significantly degrades performance in data-parallel systems. The bottleneck stems from limited network bandwidth: transferring data over the network is much slower than accessing it from local disk. Therefore, good data locality reduces network traffic and improves performance in data-intensive HPC systems. However, Hadoop's scheduler does not adequately account for data locality during resource assignment. In this paper, we present a locality-aware scheduling algorithm (LaSA) for the Hadoop-MapReduce scheduler. First, we propose a mathematical model of the weight of data interference in the Hadoop scheduler. Second, we present the LaSA algorithm, which uses the weight of data interference to provide data locality-aware resource assignment in the Hadoop scheduler. Finally, we build an experimental environment with 3 clusters and 35 VMs to evaluate LaSA's performance.
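To make the locality-aware idea concrete, the sketch below shows one way a scheduler could use a per-node data-interference weight to prefer node-local tasks when a slot frees up. This is a minimal illustration under assumed definitions: the weight formula (0 if the node holds a replica of the task's input, 1 otherwise) and all class and method names are hypothetical placeholders, not the paper's actual LaSA model.

```java
import java.util.*;

/**
 * Minimal sketch of locality-aware task assignment. The "interference weight"
 * used here is a hypothetical placeholder, not the paper's LaSA model.
 */
public class LocalityAwareSchedulerSketch {

    /** A map task and the set of nodes holding replicas of its input split. */
    static final class MapTask {
        final String id;
        final Set<String> replicaNodes;
        MapTask(String id, Set<String> replicaNodes) {
            this.id = id;
            this.replicaNodes = replicaNodes;
        }
    }

    /** Hypothetical weight: 0 if the node stores the task's data, 1 otherwise. */
    static double interferenceWeight(MapTask task, String node) {
        return task.replicaNodes.contains(node) ? 0.0 : 1.0;
    }

    /**
     * When a node reports a free slot, pick the pending task with the lowest
     * interference weight (i.e., prefer node-local data), breaking ties FIFO.
     */
    static MapTask pickTaskForNode(String node, List<MapTask> pending) {
        MapTask best = null;
        double bestWeight = Double.MAX_VALUE;
        for (MapTask t : pending) {
            double w = interferenceWeight(t, node);
            if (w < bestWeight) {
                best = t;
                bestWeight = w;
            }
        }
        if (best != null) {
            pending.remove(best);
        }
        return best;
    }

    public static void main(String[] args) {
        List<MapTask> pending = new ArrayList<>(Arrays.asList(
            new MapTask("t1", new HashSet<>(Arrays.asList("nodeA", "nodeB"))),
            new MapTask("t2", new HashSet<>(Collections.singletonList("nodeC")))
        ));
        // nodeC receives t2 (node-local) even though t1 is first in FIFO order.
        MapTask chosen = pickTaskForNode("nodeC", pending);
        System.out.println("Assigned " + chosen.id + " to nodeC");
    }
}
```

A pure FIFO scheduler would hand t1 to nodeC and pay a remote read; weighting candidates by data interference is what lets the scheduler trade strict arrival order for reduced network traffic.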