Optimizing Shuffle Performance in Spark
Spark [6] is a cluster computing framework that performs in-memory computation, with the goal of outperforming disk-based engines like Hadoop [2]. As in other distributed data processing platforms, data is commonly exchanged in a many-to-many fashion, a stage traditionally known as the shuffle phase. Spark's shuffle phase contains many sources of inefficiency that, once addressed, promise substantial performance improvements. In this paper, we identify the bottlenecks in the execution of the current design and propose alternatives that solve the observed problems. We evaluate our results in terms of application-level throughput.
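The many-to-many exchange described above can be illustrated with a minimal sketch (not Spark's actual implementation) of the hash-based partitioning that underlies a shuffle: each map task buckets its output records by key, one bucket per reduce task, so that every reducer can fetch all values for its keys.

```python
# Hypothetical illustration of shuffle-style hash partitioning;
# function name and data are examples, not Spark APIs.

def hash_partition(records, num_reducers):
    """Group (key, value) pairs into num_reducers buckets by key hash."""
    buckets = [[] for _ in range(num_reducers)]
    for key, value in records:
        # Records with the same key always hash to the same bucket,
        # so each reducer receives every value for the keys it owns.
        buckets[hash(key) % num_reducers].append((key, value))
    return buckets

# One map task's output, fanned out to 3 reducers:
map_output = [("a", 1), ("b", 2), ("a", 3), ("c", 4)]
buckets = hash_partition(map_output, 3)
```

In a real cluster each bucket becomes data written to disk and later pulled over the network by its reducer, which is where the inefficiencies studied in this paper arise.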
[1] Michael Stonebraker, et al. C-Store: A Column-oriented DBMS, 2005, VLDB.
[2] Junda Liu, et al. Multi-enterprise networking, 2000.
[3] Amin Vahdat, et al. TritonSort: A Balanced Large-Scale Sorting System, 2011, NSDI.
[4] Michael J. Franklin, et al. Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing, 2012, NSDI.