Computerized data has become critical to the survival of an enterprise. Companies must have a strategy for recovering their data should a disaster such as a fire destroy the primary data center. Current mechanisms offer data managers a stark choice: rely on affordable tape but risk the loss of a full day of data and face hours or even days to recover, or enjoy the benefits of a fully synchronized on-line remote mirror but pay steep costs in both write latency and network bandwidth to maintain it. In this paper, we argue that asynchronous mirroring, in which batches of updates are periodically sent to the remote mirror, lets data managers find a balance between these extremes. First, by eliminating the write-latency penalty, asynchrony greatly reduces the performance cost of a remote mirror. Second, by accumulating batches of writes, asynchronous mirroring can avoid sending deleted or overwritten data and thereby reduce network bandwidth requirements. Data managers can tune the update frequency to trade network bandwidth against the potential loss of more data. We present SnapMirror, an asynchronous mirroring technology that leverages file system snapshots to ensure the consistency of the remote mirror and to optimize data transfer. Using traces from production filers, we show that even updating an asynchronous mirror every 15 minutes can reduce the data transferred by 30% to 80%. We find that exploiting file system knowledge of deletions is critical to achieving any reduction for no-overwrite file systems such as WAFL and LFS. Experiments on a running system show that using file system metadata can reduce the time to identify changed blocks from minutes to seconds compared with purely logical approaches. Finally, we show that using SnapMirror to update every 30 minutes increases the response time of a heavily loaded system by only 22%.
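To make the batching idea concrete, the following is a minimal sketch of snapshot-based asynchronous mirroring under stated assumptions; it is not SnapMirror's actual interface. It assumes each snapshot records, per block, the consistency point at which the block was last written, plus an allocation set. The delta shipped to the mirror is then just the blocks written since the previous transferred snapshot that are still allocated, so data deleted or overwritten within the batching interval is never sent. The names `Snapshot`, `cp_number`, `allocated`, and `block_written_cp` are illustrative.

```python
from dataclasses import dataclass, field


@dataclass
class Snapshot:
    """Point-in-time image of a volume's block-level metadata (hypothetical model)."""
    cp_number: int                                               # consistency point when taken
    allocated: set[int] = field(default_factory=set)             # block numbers in use
    block_written_cp: dict[int, int] = field(default_factory=dict)  # blkno -> cp of last write


def blocks_to_transfer(base: Snapshot, new: Snapshot) -> set[int]:
    """Blocks needed to bring the mirror from `base` up to `new`.

    Only blocks still allocated in `new` and written after `base` are sent;
    anything deleted or overwritten between the two snapshots is skipped,
    which is where batching saves network bandwidth.
    """
    return {
        blk
        for blk in new.allocated
        if new.block_written_cp.get(blk, 0) > base.cp_number
    }


def async_update(base: Snapshot, new: Snapshot, send_block) -> Snapshot:
    """One asynchronous mirror update: ship the block delta, then advance the base."""
    for blk in sorted(blocks_to_transfer(base, new)):
        send_block(blk)   # e.g. write the block image to the remote mirror
    return new            # the new snapshot becomes the base for the next update
```

Because the delta is computed from snapshot metadata rather than by logically walking the file tree, identifying the changed blocks scales with the amount of metadata scanned, not with the number of files, which is consistent with the minutes-to-seconds reduction reported above.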