High Performance Message-passing InfiniBand Communication Device for Java HPC

Abstract: MPJ Express is a Java messaging system that implements an MPI-like interface. It is used for writing parallel Java applications on High Performance Computing (HPC) hardware, including commodity clusters. The software can execute in multicore and cluster modes. In cluster mode, it currently supports Ethernet- and Myrinet-based interconnects and provides specialized communication devices for these networks. One recent trend in distributed-memory parallel hardware is the emergence of the InfiniBand interconnect, a high-performance network that provides low latency and high bandwidth for parallel MPI applications. Currently there is no direct support in Java (and hence in MPJ Express) to exploit the performance benefits of InfiniBand networks. The only option for running distributed Java programs over InfiniBand is to rely on emulation layers such as IP over InfiniBand (IPoIB) and the Sockets Direct Protocol (SDP), which deliver poor communication performance. To tackle this issue in the context of MPJ Express, this paper presents a low-level communication device called ibdev that can be used to execute parallel Java applications on InfiniBand clusters. Because MPJ Express is based on a layered architecture, users can opt to use ibdev at runtime on an InfiniBand-equipped commodity cluster. ibdev improves Java application performance by accessing InfiniBand hardware through the native verbs API. Our performance evaluation reveals that MPJ Express achieves much better latency and bandwidth with this new device than with IPoIB and SDP. The improvement in communication performance is also evident in the NAS parallel benchmark results, where ibdev helps MPJ Express achieve better scalability and speedups than IPoIB and SDP. The results show that it is possible to reduce the performance gap between Java and native languages with efficient support for low-level communication libraries.
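For illustration, here is a minimal ping-pong sketch written against the mpiJava 1.2 API that MPJ Express implements (mpi.MPI, MPI.COMM_WORLD). The program itself is device-independent: the communication device is chosen at launch time through the mpjrun.sh -dev flag. The flag value ibdev in the launch command below is an assumption based on the device name introduced in this paper (existing devices are selected analogously, e.g. -dev niodev); the exact spelling may differ in a given MPJ Express release.

    import mpi.MPI;

    public class PingPong {
        public static void main(String[] args) throws Exception {
            MPI.Init(args);                      // start the MPJ Express runtime
            int rank = MPI.COMM_WORLD.Rank();    // this process's rank
            byte[] buf = new byte[1024];         // 1 KB message payload

            if (rank == 0) {
                // Rank 0 sends first, then waits for the echo.
                MPI.COMM_WORLD.Send(buf, 0, buf.length, MPI.BYTE, 1, 99);
                MPI.COMM_WORLD.Recv(buf, 0, buf.length, MPI.BYTE, 1, 99);
                System.out.println("ping-pong complete");
            } else if (rank == 1) {
                // Rank 1 echoes the message back.
                MPI.COMM_WORLD.Recv(buf, 0, buf.length, MPI.BYTE, 0, 99);
                MPI.COMM_WORLD.Send(buf, 0, buf.length, MPI.BYTE, 0, 99);
            }

            MPI.Finalize();                      // shut down the runtime
        }
    }

    // Hypothetical launch over InfiniBand (device name assumed from the paper):
    //   mpjrun.sh -np 2 -dev ibdev PingPong

Because the device sits in a layer beneath the point-to-point API, the same bytecode runs unchanged over Ethernet (niodev), Myrinet (mxdev), or, with the device presented here, InfiniBand.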
