With a view to employing 10 Gigabit Ethernet as the networking technology for the online systems and offline data analysis centers of High Energy Physics experiments, we performed a series of measurements of point-to-point data transfers over 10 Gigabit Ethernet links, using network interface cards mounted on the PCI-Express bus of commodity PCs as both transmitters and receivers. Under real operating conditions, the maximum achievable transfer rate through a network link is limited not only by the capacity of the link itself, but also by the capacities of the memory and peripheral buses and by the ability of the CPUs and the operating system to handle packet processing and the interrupts raised by the network interface cards in due time. Besides the maximum TCP and UDP data transfer throughputs, we also measured the CPU loads of the sender/receiver processes and of the interrupt and soft-interrupt handlers as a function of packet size, using either standard or "jumbo" Ethernet frames. In addition, we repeated the same measurements while simultaneously reading data from Fibre Channel links and forwarding them through a 10 Gigabit Ethernet link, thus emulating the behavior of a disk server in a Storage Area Network exporting data to client machines via 10 Gigabit Ethernet.
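The kind of send-side throughput scan described above (datagram rate as a function of payload size, for standard and jumbo frames) can be sketched in a few lines. This is only an illustrative loopback sketch, not the authors' actual measurement setup, which used dedicated 10 Gigabit Ethernet hardware and kernel-level instrumentation; the destination address, port, and payload sizes below are assumptions for the example.

```python
import socket
import time

def udp_send_throughput(dest, payload_size, duration=0.2):
    """Send fixed-size UDP datagrams to `dest` for `duration` seconds
    and return the achieved send-side throughput in Mbit/s.
    Illustrative only: a real measurement would also record receive-side
    rates and CPU load of the interrupt/soft-interrupt handlers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * payload_size
    sent_bytes = 0
    start = time.monotonic()
    while time.monotonic() - start < duration:
        sent_bytes += sock.sendto(payload, dest)
    elapsed = time.monotonic() - start
    sock.close()
    return sent_bytes * 8 / elapsed / 1e6  # Mbit/s

# Payload sizes for a standard 1500-byte MTU and a 9000-byte jumbo MTU,
# minus 20 bytes of IPv4 header and 8 bytes of UDP header.
for size in (1472, 8972):
    rate = udp_send_throughput(("127.0.0.1", 9999), size)
```

Scanning payload sizes this way exposes the per-packet overhead: at small datagrams the sender is CPU-bound on packet processing, while near the (jumbo) MTU the link or bus capacity dominates.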