Layout-aware I/O Scheduling for Terabit Data Movement

Many science facilities, such as the Department of Energy's Leadership Computing Facilities and experimental facilities including the Spallation Neutron Source, the Stanford Linear Accelerator Center, and the Advanced Photon Source, produce massive amounts of experimental and simulation data. These data are often shared among the facilities and with collaborating institutions. Moving large datasets over the wide-area network (WAN) is a major problem inhibiting collaboration. Next-generation terabit networks will help alleviate the problem; however, the parallel storage systems on the end-system hosts at these institutions can become a bottleneck for terabit data movement. The parallel file system (PFS) is shared by simulation systems, experimental systems, and analysis and visualization clusters, in addition to wide-area data movers. These competing uses often induce temporary but significant I/O load imbalances on the storage system, which degrade performance for all users. The problem is a serious concern because some resources are more expensive (e.g., supercomputers) or have time-critical deadlines (e.g., experimental data from a light source), yet parallel file systems treat all requests equally even when some storage servers are under heavy load. This paper investigates the problem of competing workloads accessing the parallel file system and shows how the performance of wide-area data movement can be improved in these environments. First, we study the I/O load imbalance problem using actual I/O performance data collected from the Spider storage system at the Oak Ridge Leadership Computing Facility. Second, we present layout-aware I/O optimization techniques on end-system hosts for bulk data movement. Our evaluation shows that these techniques can avoid I/O-congested disk groups, improving storage I/O times on parallel storage systems for terabit data movement.
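To make the layout-aware idea concrete, the minimal Python sketch below (an illustration, not the paper's implementation) orders per-stripe read requests for a Lustre file by the current load of the object storage targets (OSTs) that hold them, so that chunks on congested disk groups are deferred. The `lfs getstripe` parsing and the `ost_load` feed are assumptions made for this sketch; an actual deployment would obtain the stripe map and per-OST load from the facility's storage monitoring.

#!/usr/bin/env python3
"""Layout-aware read-ordering sketch for a Lustre file.

Assumptions: the stripe-to-OST mapping is read with `lfs getstripe`
(output format varies between Lustre versions), and per-OST load
numbers come from an external monitor (stubbed out here).
"""
import re
import subprocess
from typing import Dict, List, Tuple


def get_stripe_osts(path: str) -> List[int]:
    """Return the OST index for each stripe of `path`.

    Parses the obdidx table printed by `lfs getstripe`; treat this
    as a sketch rather than a robust parser.
    """
    out = subprocess.run(["lfs", "getstripe", path],
                         capture_output=True, text=True, check=True).stdout
    osts: List[int] = []
    in_table = False
    for line in out.splitlines():
        if "obdidx" in line:
            in_table = True
            continue
        if in_table:
            m = re.match(r"\s*(\d+)\s", line)
            if m:
                osts.append(int(m.group(1)))
    return osts


def order_chunks_by_load(path: str,
                         stripe_size: int,
                         ost_load: Dict[int, float]) -> List[Tuple[int, int]]:
    """Return (offset, ost_index) pairs for each stripe-sized chunk,
    least-loaded OSTs first, so congested disk groups are read last."""
    osts = get_stripe_osts(path)
    chunks = [(i * stripe_size, ost) for i, ost in enumerate(osts)]
    return sorted(chunks, key=lambda c: ost_load.get(c[1], 0.0))


if __name__ == "__main__":
    # Hypothetical load feed: OST index -> utilization in [0, 1].
    load = {0: 0.9, 1: 0.1, 2: 0.4}
    for off, ost in order_chunks_by_load("/lustre/somefile", 1 << 20, load):
        print(f"read offset {off} from OST {ost}")

The design choice illustrated here is to reorder rather than throttle: the full file is still transferred, but stripes on the most heavily loaded servers are visited last, giving their transient load a chance to subside before the data mover reaches them.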
