Because of their performance characteristics, high-performance fabrics like InfiniBand or Omni-Path are interesting technologies for many local area network applications, including data acquisition systems for high-energy physics experiments like the ATLAS experiment at CERN. This paper analyzes existing APIs for high-performance fabrics and evaluates their suitability for data acquisition systems in terms of performance and domain applicability. The study finds that existing software APIs for high-performance interconnects are focused on high-performance computing applications with specific workloads and are not compatible with the requirements of data acquisition systems. To evaluate the use of high-performance interconnects in data acquisition systems, a custom library called NetIO has been developed and is compared against existing technologies.

NetIO has a message-queue-like interface, which matches the ATLAS use case better than traditional HPC APIs like MPI. The architecture of NetIO is based on an interchangeable back-end system that supports different interconnects; a libfabric-based back-end covers a wide range of fabric technologies, including InfiniBand. On the front-end side, NetIO supports several high-level communication patterns found in typical data acquisition applications, such as client/server and publish/subscribe. Unlike other frameworks, NetIO distinguishes between high-throughput and low-latency communication, which is essential for applications with heterogeneous traffic patterns. This feature allows experiments like ATLAS to use a single network for different traffic types, such as physics data and detector control.

Benchmarks comparing NetIO with the message queue implementation ØMQ are presented. NetIO reaches up to 2x higher throughput on Ethernet and up to 3x higher throughput on FDR InfiniBand compared to ØMQ on Ethernet. The latencies measured with NetIO are comparable to those of ØMQ.
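The abstract does not show NetIO's programming interface, so the sketch below instead uses the ØMQ C API (the baseline NetIO is benchmarked against) to illustrate the publish/subscribe, message-queue style of communication described above. The endpoint address and message payload are illustrative assumptions only; NetIO's actual API differs.

/* Minimal publish/subscribe sketch with the ØMQ C API.
 * Endpoint and payload are placeholders for illustration. */
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();

    /* Publisher side: bind a PUB socket. */
    void *pub = zmq_socket(ctx, ZMQ_PUB);
    assert(zmq_bind(pub, "tcp://127.0.0.1:5556") == 0);

    /* Subscriber side: connect a SUB socket and subscribe to all topics. */
    void *sub = zmq_socket(ctx, ZMQ_SUB);
    assert(zmq_connect(sub, "tcp://127.0.0.1:5556") == 0);
    assert(zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0) == 0);

    /* Allow the subscription to propagate (ØMQ's "slow joiner" effect),
     * then publish a single message. */
    sleep(1);
    const char *msg = "event-fragment";
    zmq_send(pub, msg, strlen(msg), 0);

    /* Receive and print the message on the subscriber. */
    char buf[64];
    int n = zmq_recv(sub, buf, sizeof(buf) - 1, 0);
    if (n >= 0) {
        buf[n] = '\0';
        printf("received: %s\n", buf);
    }

    zmq_close(sub);
    zmq_close(pub);
    zmq_ctx_destroy(ctx);
    return 0;
}

Built against libzmq (e.g. cc pubsub.c -lzmq), this pattern corresponds to the publish/subscribe front-end the paper describes for NetIO, where the choice of back-end determines whether traffic is carried over Ethernet or a fabric such as InfiniBand.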