One Self-Adaptive Memory Scheduling Algorithm for the Shuffle Process in Spark Platform

The Shuffle module is one of the core modules of the Spark platform, and its performance directly affects the performance and throughput of the platform as a whole. The existing memory scheduling algorithm for the Shuffle process divides memory evenly among tasks based only on the number of tasks, without considering that different tasks have different memory requirements; this lowers memory utilization and running efficiency when the data is skewed. To solve this problem, this paper proposes a self-adaptive memory scheduling algorithm for the Shuffle process (SAMSAS), which does not require task-processing priorities to be set in advance. Instead, it adjusts memory allocation adaptively by continuously monitoring and learning the actual memory requirements of executing tasks. Experimental results show that SAMSAS improves the utilization of the entire memory pool and the running efficiency of each task, and in particular it effectively improves the running efficiency of the Spark platform when processing skewed data.
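
To make the abstract's core idea concrete, the sketch below illustrates one plausible reading of it under stated assumptions: a shared shuffle memory pool that, instead of splitting its capacity evenly across tasks, keeps a smoothed estimate of each task's observed demand and grants shares in proportion to that estimate. All names here (SamsasPool, requestMemory, releaseMemory, the smoothing factor alpha) are hypothetical illustrations, not the paper's actual implementation.

```scala
import scala.collection.mutable

// Hypothetical sketch of demand-proportional shuffle memory scheduling.
// Assumption: tasks call requestMemory/releaseMemory as they build
// shuffle buffers, and the pool learns per-task demand from those calls.
class SamsasPool(val poolSize: Long) {
  // Bytes currently granted to each running task.
  private val granted = mutable.Map.empty[Long, Long]
  // Exponentially smoothed estimate of each task's real memory demand,
  // learned from its past requests instead of assuming equal 1/N shares.
  private val demandEstimate = mutable.Map.empty[Long, Double]
  private val alpha = 0.5 // smoothing factor (illustrative value)

  /** A task asks for `numBytes`; returns the number of bytes granted. */
  def requestMemory(taskId: Long, numBytes: Long): Long = synchronized {
    // Learn the task's demand: smooth the new request into the estimate.
    val old = demandEstimate.getOrElse(taskId, numBytes.toDouble)
    demandEstimate(taskId) = alpha * numBytes + (1 - alpha) * old

    // Weight each task's share of the pool by its estimated demand,
    // rather than splitting the pool evenly regardless of skew.
    val totalDemand = math.max(1.0, demandEstimate.values.sum)
    val share = (poolSize * demandEstimate(taskId) / totalDemand).toLong

    val used = granted.getOrElse(taskId, 0L)
    val free = poolSize - granted.values.sum
    val grant = math.max(0L, math.min(numBytes, math.min(share - used, free)))
    granted(taskId) = used + grant
    grant
  }

  /** Release memory when a task spills or finishes. */
  def releaseMemory(taskId: Long, numBytes: Long): Unit = synchronized {
    val used = granted.getOrElse(taskId, 0L)
    granted(taskId) = math.max(0L, used - numBytes)
  }
}
```

Under skew, a task that repeatedly requests large buffers accumulates a larger demand estimate and therefore a larger share of the pool, while light tasks shrink toward what they actually use; an even per-task split would instead starve the heavy task while leaving the light tasks' unused shares idle.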
