Improving packet processing performance in the ATLAS FELIX project: analysis and optimization of a memory-bounded algorithm
Experiments in high-energy physics (HEP) and related fields impose demanding constraints on their data acquisition systems. As a result, these systems are implemented as unique mixtures of custom and commercial-off-the-shelf (COTS) electronics, connecting radiation-hard devices, large high-performance networks, and computing farms. FELIX, the Frontend Link Exchange, is a new PC-based general-purpose data routing device for the data-acquisition system of the ATLAS experiment at CERN. Performance is crucial for devices like FELIX, which must be capable of processing tens of gigabytes of data per second. It is therefore important to understand the performance limitations of typical workloads on modern hardware. In this paper, an analysis of the FELIX packet processing algorithm is presented. The role played by the PC system's memory architecture in the overall data throughput is discussed and supported both by measurements and by theoretical analysis. Finally, optimizations that increase the processing throughput by more than a factor of ten are analyzed.
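One standard way to reason about such memory-bounded workloads is the Roofline model (Williams et al.): attainable throughput is bounded by the lesser of peak compute and memory bandwidth times arithmetic intensity. A minimal Python sketch, using illustrative machine numbers that are assumptions rather than the paper's measurements:

```python
def roofline_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Attainable performance (GFLOP/s) under the Roofline model:
    min(peak compute, memory bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Hypothetical machine: 500 GFLOP/s peak compute, 50 GB/s DRAM bandwidth.
# Low-intensity packet processing (e.g. 0.25 FLOP/byte) is memory-bound:
print(roofline_gflops(500.0, 50.0, 0.25))   # bandwidth-limited
print(roofline_gflops(500.0, 50.0, 100.0))  # compute-limited
```

Packet routing touches each byte only a few times, so its arithmetic intensity is low; in this regime the model predicts that throughput scales with effective memory bandwidth, which is why cache- and memory-oriented optimizations can yield order-of-magnitude gains.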