This paper discusses an independent file facility, one that is not embedded in an operating system. The distributed file system (DFS) is so named because it is implemented on a cooperating set of server computers connected by a communications network, which together create the illusion of a single, logical system for the creation, deletion, and random accessing of data. Access to the DFS can be accomplished only over the network; a computer (or, more precisely, a program running on one) that uses the DFS is called a client. This paper describes the division of responsibility between servers and clients. The basic tool for maintaining data consistency in these situations is the atomic property of transactions, which protects clients from system malfunctions and from the competing activities of other clients. Several cooperating clients may share a transaction. The DFS provides an unconventional locking mechanism between transactions that supports client caches and eliminates a novel form of deadly embrace. We have implemented and put into service a system based on these concepts.
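The atomic property and inter-transaction locking described above can be sketched as a toy in-memory model. All names here (`DFSClient`, `begin_transaction`, and so on) are illustrative assumptions, not the paper's actual interface: writes are buffered per transaction and applied all at once on commit, and a simple exclusive lock per file stands in for the paper's locking mechanism.

```python
# Toy sketch of transactional file access, assuming a hypothetical
# client-facing API; NOT the DFS interface from the paper.

class Transaction:
    """Groups reads and writes so they commit or abort atomically."""

    def __init__(self, store):
        self.store = store   # shared stand-in for the server's file store
        self.writes = {}     # buffered updates, applied only on commit
        self.locks = set()   # file ids this transaction holds locks on

    def read(self, file_id):
        self._lock(file_id)
        # A transaction sees its own buffered writes before committed data.
        if file_id in self.writes:
            return self.writes[file_id]
        return self.store.data.get(file_id)

    def write(self, file_id, value):
        self._lock(file_id)
        self.writes[file_id] = value  # invisible to others until commit

    def commit(self):
        # Atomic property: all buffered writes are applied together.
        self.store.data.update(self.writes)
        self._release()

    def abort(self):
        # Buffered writes are discarded; committed state is untouched.
        self.writes.clear()
        self._release()

    def _lock(self, file_id):
        if file_id in self.locks:
            return
        if file_id in self.store.locked:
            # Stand-in for a lock conflict with a competing transaction.
            raise RuntimeError("file locked by a competing transaction")
        self.store.locked.add(file_id)
        self.locks.add(file_id)

    def _release(self):
        self.store.locked -= self.locks
        self.locks.clear()


class DFSClient:
    """Toy stand-in for the server-side store plus a client stub."""

    def __init__(self):
        self.data = {}
        self.locked = set()

    def begin_transaction(self):
        return Transaction(self)
```

In this model, an aborted transaction leaves the store exactly as it was, and a second transaction touching a locked file fails immediately; the real system's locking is richer (it supports client caches and avoids a form of deadly embrace), which this sketch does not attempt to capture.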