Reliability and Failure Impact Analysis of Distributed Storage Systems with Dynamic Refuging

In recent data centers, large-scale storage systems storing big data comprise thousands of large-capacity drives. Our goal is to establish a method for building highly reliable storage systems from more than a thousand low-cost, large-capacity drives. Some large-scale storage systems protect data with erasure coding to prevent data loss. Raising the redundancy level of the erasure code lowers the probability of data loss, but it also incurs additional write operations and extra storage for the coded blocks. We therefore need to achieve high reliability at the lowest possible redundancy level. Two concerns affect reliability in large-scale storage systems: (i) as the number of drives increases, systems become more susceptible to multiple drive failures, and (ii) distributing stripes among many drives can shorten the rebuild time but increases the risk of data loss under multiple drive failures. If data loss does occur because of multiple drive failures, it affects many users of the storage system. These concerns were not addressed in prior quantitative reliability studies based on realistic settings. In this work, we analyze the reliability of large-scale storage systems with distributed stripes, focusing on an effective rebuild method that we call Dynamic Refuging. Dynamic Refuging rebuilds failed blocks starting from those with the lowest remaining redundancy and strategically selects the blocks to read when repairing lost data. We model how the amount of data at each redundancy level changes dynamically as multiple drives fail, and we perform a reliability analysis by Monte Carlo simulation using realistic drive failure characteristics. We also present a failure impact model and a method for localizing the impact of a failure. When stripes with redundancy level 3 are sufficiently distributed and rebuilt by Dynamic Refuging, the proposed technique scales well: the probability of data loss decreases by two orders of magnitude for a system with a thousand drives compared to conventional RAID. An appropriate setting of the stripe distribution level can localize the impact of a failure.

Key words: erasure coding, highly redundant storage systems, reliability, rebuild, Monte Carlo simulation
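To make the simulation approach concrete, the following is a minimal Monte Carlo sketch of the core idea, not the authors' simulator: drives fail at random, stripes are spread over randomly chosen drives, and the rebuild capacity available between failures is spent on the least-redundant stripes first, in the spirit of Dynamic Refuging. All parameter values (drive count, stripe width, MTTF, rebuild time) are illustrative assumptions, and the exponential lifetime model is a simplification; the paper's analysis uses realistic drive failure characteristics.

```python
# dynamic_refuging_sim.py -- illustrative sketch only; parameters are assumptions.
import random

N_DRIVES = 1000        # drives in the pool
N_STRIPES = 2000       # stripes, each spread over randomly chosen drives
STRIPE_WIDTH = 10      # blocks per stripe (data + parity)
REDUNDANCY = 3         # a stripe survives the loss of up to 3 blocks
MTTF_HOURS = 1.0e6     # per-drive mean time to failure (exponential, assumed)
REBUILD_HOURS = 6.0    # time to repair one lost block of one stripe (assumed)
MISSION_HOURS = 10 * 8760
N_TRIALS = 200

def run_trial(rng):
    """Simulate one 10-year mission; return True if data loss occurs."""
    placement = [rng.sample(range(N_DRIVES), STRIPE_WIDTH)
                 for _ in range(N_STRIPES)]
    on_drive = [[] for _ in range(N_DRIVES)]   # reverse index: drive -> stripes
    for s, drives in enumerate(placement):
        for d in drives:
            on_drive[d].append(s)
    lost = [set() for _ in range(N_STRIPES)]   # lost-block positions per stripe
    degraded = set()                           # stripes with >= 1 lost block
    t, total_rate = 0.0, N_DRIVES / MTTF_HOURS
    while True:
        dt = rng.expovariate(total_rate)       # time until the next drive failure
        if t + dt >= MISSION_HOURS:
            return False                       # survived the mission
        # Dynamic Refuging: spend the rebuild capacity available in this
        # quiet interval on the least-redundant (most degraded) stripes first.
        budget = int(dt / REBUILD_HOURS)       # block repairs we can complete
        for s in sorted(degraded, key=lambda i: -len(lost[i])):
            if budget == 0:
                break
            while lost[s] and budget > 0:
                lost[s].pop()                  # one block repaired
                budget -= 1
            if not lost[s]:
                degraded.discard(s)
        t += dt
        failed = rng.randrange(N_DRIVES)       # any drive may fail; it is
                                               # replaced immediately afterwards
        for s in on_drive[failed]:
            lost[s].add(failed)
            degraded.add(s)
            if len(lost[s]) > REDUNDANCY:
                return True                    # > r blocks lost: data loss

rng = random.Random(42)
hits = sum(run_trial(rng) for _ in range(N_TRIALS))
print(f"estimated P(data loss in 10 years) = {hits}/{N_TRIALS} = {hits/N_TRIALS:.3f}")
```

Swapping the rebuild policy from "most degraded first" to an arbitrary order in this sketch shows the qualitative effect the paper quantifies: prioritizing the stripes closest to data loss keeps the system out of the low-redundancy states where a further failure is fatal.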
