A study of hardware assisted IP over InfiniBand and its impact on enterprise data center performance

High-performance sockets implementations such as the Sockets Direct Protocol (SDP) have traditionally shown major performance advantages over the TCP/IP stack running on InfiniBand (IPoIB). These implementations bypass the kernel-based TCP/IP stack and exploit network hardware features to deliver higher performance. SDP performs very well but has limited utility: only applications that use the TCP sockets API can benefit from it, while other uses of the IP stack (IPsec, UDP, SCTP) and TCP-layer extensions such as iSCSI cannot. Recently, newer generations of InfiniBand adapters, such as Mellanox ConnectX, have added hardware support for the IP stack itself, including Large Send Offload (LSO) and Large Receive Offload (LRO). Because such high-performance socket networks are likely to be deployed alongside, or converged with, existing Ethernet networking solutions, it is important to assess the performance of these technologies. In this paper we take a first look at the performance advantages provided by these offload techniques and compare them with SDP. Our micro-benchmarks and enterprise data-center experiments show that hardware-assisted IPoIB can deliver performance competitive with SDP and can even outperform it in some cases.
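For context, the limitation noted above is that SDP accelerates only traffic issued through the TCP sockets API. A minimal sketch, assuming a conventional Linux sockets environment, is shown below: the same unmodified client code can run over kernel TCP/IP on an IPoIB interface (where it benefits from adapter offloads such as LSO/LRO) or be redirected over SDP, for example via the preload library shipped with OFED, because SDP preserves the sockets interface. The host address, port, and payload are illustrative placeholders, not values from the paper.

```c
/* Minimal TCP client using the standard sockets API.
 * The same binary can run over kernel TCP/IP on IPoIB or be redirected
 * over SDP, since SDP keeps the sockets interface unchanged.
 * Address, port, and message are hypothetical examples. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* ordinary TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(5001);                            /* placeholder port */
    inet_pton(AF_INET, "192.168.0.1", &srv.sin_addr);      /* placeholder IPoIB peer */

    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char msg[] = "hello over IPoIB or SDP";
    send(fd, msg, sizeof(msg) - 1, 0);                     /* plain sockets send */
    close(fd);
    return 0;
}
```

By contrast, whether the LSO/LRO offloads studied here are active is a property of the IPoIB interface and driver rather than of the application; on Linux such offload settings are typically inspected and toggled through ethtool, with exact feature names varying by driver and OFED release.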
