Comparative Performance Analysis of RDMA-Enhanced Ethernet

Since the advent of high-performance distributed computing, system designers and end users have faced the challenge of identifying and exploiting a communications infrastructure that is optimal for a diverse mix of applications in terms of performance, scalability, cost, wiring complexity, protocol maturity, and versatility. Today, the span of interconnect options for a cluster typically ranges from local-area networks such as Gigabit Ethernet to system-area networks such as InfiniBand. New technologies are emerging to bridge the performance gap (e.g., latency) between these classes of high-performance interconnects by adapting advanced communication methods such as remote direct memory access (RDMA) to the Ethernet and IP environment. This paper provides an experimental performance analysis and comparison of three competing interconnect options for distributed computing: conventional Gigabit Ethernet, first-generation technologies for RDMA-enhanced Gigabit Ethernet, and InfiniBand. Results from basic MPI-level communication benchmarks and several application-level benchmarks on a cluster of Linux servers show that emerging technologies for low-latency IP/Ethernet communications have the potential to achieve performance levels rivaling those of costlier alternatives.
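To make the notion of an MPI-level communication benchmark concrete, the sketch below shows a minimal ping-pong latency test between two ranks, the standard way small-message latency is measured over an interconnect. This is an illustrative example only, not the benchmark suite used in the paper; the iteration count and message size are arbitrary choices for the sketch.

```c
/*
 * Minimal MPI ping-pong latency sketch (illustrative; not the paper's
 * benchmark code). Rank 0 and rank 1 exchange a small message repeatedly
 * and rank 0 reports the average one-way latency.
 *
 * Build and run (typical MPI toolchain):
 *   mpicc -O2 pingpong.c -o pingpong && mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

#define ITERATIONS 1000       /* arbitrary repeat count for averaging      */
#define MSG_BYTES  8          /* small message to expose latency, not bandwidth */

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_BYTES] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);          /* synchronize before timing */
    double start = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0) {
        /* Each iteration is a round trip; divide by two for one-way latency. */
        printf("avg one-way latency: %.2f us\n",
               elapsed / ITERATIONS / 2.0 * 1e6);
    }

    MPI_Finalize();
    return 0;
}
```

Running the same unmodified MPI program over Gigabit Ethernet (TCP), an RDMA-enhanced Ethernet transport, and InfiniBand is what allows the interconnects to be compared directly at the MPI level: only the underlying transport changes, not the application code.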