Enabling high-speed asynchronous data extraction and transfer using DART

As the complexity and scale of applications grow, managing and transporting the large volumes of data they generate are quickly becoming a significant challenge. Moreover, the interactive and real-time nature of emerging applications, as well as their increasing runtimes, make online data extraction and analysis a key requirement alongside traditional data I/O and archiving. To be effective, online data extraction and transfer should impose minimal synchronization requirements, have minimal impact on computational performance and communication latencies, maintain overall quality of service, and ensure that no data is lost. In this paper we present Decoupled and Asynchronous Remote Transfers (DART), an efficient data transfer substrate that addresses these requirements. DART is a thin software layer built on RDMA technology that enables fast, low-overhead, asynchronous access to data from a running simulation and supports high-throughput, low-latency data transfers. DART has been integrated with applications simulating fusion plasma in a tokamak that are being developed at the Center for Plasma Edge Simulation (CPES), a DoE Office of Fusion Energy Science (OFES) Fusion Simulation Project (FSP). A performance evaluation using the Gyrokinetic Toroidal Code and the XGC-1 particle-in-cell-based FSP simulations running on the Cray XT3/XT4 system at Oak Ridge National Laboratory demonstrates that DART can effectively and efficiently offload simulation data to local service nodes and remote analysis nodes, with minimal overhead on the simulation itself. Copyright © 2010 John Wiley & Sons, Ltd.

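The central mechanism the abstract describes is decoupling data extraction from the simulation: compute nodes hand buffers to a thin client layer and return to computation while a separate engine, backed by RDMA on the Cray XT platform, completes the transfer off the critical path. The sketch below is a minimal, hedged illustration of that post/complete/wait pattern in portable C, with a background thread standing in for the RDMA transport and the service node that drains the data; the names (xfer_post, xfer_wait, xfer_engine) and the queue layout are illustrative assumptions, not DART's actual API.

```c
/* Illustrative sketch of asynchronous, decoupled data offload.
 * NOT DART's interface: all names and structures here are assumptions. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAX_PENDING 16

typedef struct {
    void   *buf;   /* simulation data to offload               */
    size_t  len;   /* buffer size in bytes                     */
    int     done;  /* set by the engine when the transfer ends */
} xfer_req;

static xfer_req        queue[MAX_PENDING];
static int             head, tail;   /* demo never exceeds MAX_PENDING in flight */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

/* Non-blocking post: the simulation resumes computing immediately. */
static xfer_req *xfer_post(void *buf, size_t len)
{
    pthread_mutex_lock(&lock);
    xfer_req *r = &queue[tail % MAX_PENDING];
    r->buf = buf; r->len = len; r->done = 0;
    tail++;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
    return r;
}

/* Blocks only when the simulation must reuse the posted buffer. */
static void xfer_wait(xfer_req *r)
{
    pthread_mutex_lock(&lock);
    while (!r->done)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

/* Background engine: stands in for the asynchronous RDMA transport
 * and the service node that pulls data off the compute node. */
static void *xfer_engine(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail)
            pthread_cond_wait(&cond, &lock);
        xfer_req *r = &queue[head % MAX_PENDING];
        head++;
        pthread_mutex_unlock(&lock);

        usleep(1000);   /* placeholder for the remote put/get of r->buf */
        fprintf(stderr, "offloaded %zu bytes\n", r->len);

        pthread_mutex_lock(&lock);
        r->done = 1;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t engine;
    pthread_create(&engine, NULL, xfer_engine, NULL);

    double field[1024];
    for (int step = 0; step < 4; step++) {
        memset(field, step, sizeof field);             /* "compute" one step       */
        xfer_req *r = xfer_post(field, sizeof field);  /* hand data off, no block  */
        /* ... further computation would overlap the transfer here ... */
        xfer_wait(r);                                  /* only before buffer reuse */
    }
    return 0;
}
```

The property this sketch mirrors is that the only blocking point is buffer reuse, so transfer latency overlaps computation rather than extending it, which is the behavior the evaluation on the Cray XT3/XT4 is measuring.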