Comparative Performance Analysis of RDMA-Enhanced Ethernet
Since the advent of high-performance distributed computing, system designers and end-users have been challenged with identifying and exploiting a communications infrastructure that is optimal for a diverse mix of applications in terms of performance, scalability, cost, wiring complexity, protocol maturity, versatility, etc. Today, the span of interconnect options for a cluster typically ranges from local-area networks such as Gigabit Ethernet to system-area networks such as InfiniBand. New technologies are emerging to bridge the performance gap (e.g., latency) between these classes of high-performance interconnects by adapting advanced communication methods such as remote direct-memory access (RDMA) to the Ethernet and IP environment. This paper provides an experimental performance analysis and comparison of three competing interconnect options for distributed computing: conventional Gigabit Ethernet; first-generation technologies for RDMA-enhanced Gigabit Ethernet; and InfiniBand. Results from basic MPI-level communication benchmarks and several application-level benchmarks on a cluster of Linux servers show that emerging technologies for low-latency IP/Ethernet communications have the potential to achieve performance levels rivaling costlier alternatives.
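
As a rough illustration of the kind of basic MPI-level communication benchmark referenced above, the following is a minimal ping-pong latency sketch in C. It is not the paper's benchmark code; the iteration count and message size are arbitrary assumptions, and real comparisons would sweep message sizes and report bandwidth as well as latency.

/* Illustrative MPI ping-pong latency sketch (not the paper's benchmark code).
 * Rank 0 and rank 1 exchange a small message repeatedly and report the
 * average one-way latency. */
#include <mpi.h>
#include <stdio.h>

#define ITERATIONS 1000
#define MSG_SIZE   8          /* bytes; small message exposes latency */

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_SIZE] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0) {
        /* Half the round-trip time approximates one-way latency. */
        printf("avg one-way latency: %.2f us\n",
               elapsed / (2.0 * ITERATIONS) * 1e6);
    }

    MPI_Finalize();
    return 0;
}

Run with two ranks (e.g., mpirun -np 2 ./pingpong); the same source can be linked against an MPI library running over conventional Gigabit Ethernet, an RDMA-enhanced Ethernet stack, or InfiniBand, which is what makes MPI-level microbenchmarks a convenient basis for interconnect comparisons.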