Distributed computing splits a large-scale job into multiple tasks and processes them on a cluster, so cluster resource allocation is the key factor limiting the efficiency of a distributed computing platform. Hadoop is currently the most popular open-source distributed platform. However, its existing scheduling strategies are relatively simple and cannot meet needs such as sharing the cluster among multiple users, guaranteeing a minimum capacity for each job, and providing good performance for interactive jobs. This paper examines the existing scheduling strategies, analyses their shortcomings, and adds three new features to Hadoop: temporarily raising a job's weight, allowing higher-priority jobs to preempt cluster resources, and supporting the sharing of computing resources among multiple users. Experiments show that these features provide better performance for interactive jobs and a fairer share of computing time among users.
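As a rough illustration of the first feature, the sketch below shows how a scheduler might temporarily raise a young job's weight so that short interactive jobs are scheduled ahead of long-running batch jobs, in the spirit of the Hadoop Fair Scheduler's new-job weight boost. This is a minimal sketch under assumed parameters: the class and method names (WeightBooster, JobInfo, boostedWeight), the boost window, and the boost factor are all hypothetical and not the paper's actual implementation.

```java
// Hypothetical sketch: temporarily boost a recently submitted job's weight.
// All names and constants are illustrative, not the paper's code.
public class WeightBooster {
    private static final long BOOST_WINDOW_MS = 60_000; // assumed: boost jobs younger than 1 minute
    private static final double BOOST_FACTOR = 3.0;     // assumed: weight multiplier while boosted

    static class JobInfo {
        final double baseWeight; // weight derived from the job's pool or priority
        final long startTimeMs;  // submission time of the job
        JobInfo(double baseWeight, long startTimeMs) {
            this.baseWeight = baseWeight;
            this.startTimeMs = startTimeMs;
        }
    }

    /** Returns the job's effective weight, raised temporarily after submission. */
    static double boostedWeight(JobInfo job, long nowMs) {
        long age = nowMs - job.startTimeMs;
        return (age < BOOST_WINDOW_MS) ? job.baseWeight * BOOST_FACTOR
                                       : job.baseWeight;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        JobInfo young = new JobInfo(1.0, now - 10_000);  // 10 s old: boosted
        JobInfo old   = new JobInfo(1.0, now - 300_000); // 5 min old: normal weight
        System.out.println(boostedWeight(young, now));   // prints 3.0
        System.out.println(boostedWeight(old, now));     // prints 1.0
    }
}
```

Because the boost decays back to the base weight after the window expires, long batch jobs are delayed only briefly, which matches the abstract's goal of improving interactive response without permanently starving other users.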