A system of processes in which the interactions are solely through messages is often called loosely-coupled. Such systems are attractive from a programming viewpoint: they are designed by decomposing a specification into its separable concerns, each of which can then be implemented by a process, and the operation of the system can be understood by asserting properties of the message sequences transmitted among the component processes. A key attribute of loosely-coupled systems is the guarantee that a message, once sent, cannot be unsent. As a consequence, a process may commence its computation upon receiving a message, assured that no future message it receives will require it to undo its previous computations.

Processes that communicate through shared variables, where a shared variable may be read and written by an arbitrary number of processes, are often called tightly-coupled. In contrast to loosely-coupled systems, designs of tightly-coupled systems typically require deeper analysis. Since the speeds of the component processes are assumed to be nonzero and finite, but otherwise arbitrary, every possible execution sequence must be analyzed, however unlikely some of them may be, to guarantee the absence of race conditions. Special mutual-exclusion protocols are often required for a process to access a shared variable exclusively. Yet shared variables often provide succinct, even elegant, solutions; for instance, broadcasting a message can often be implemented by storing the message in a variable that every process can read.
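The contrast can be made concrete with a small sketch. The Go program below (a minimal illustration not drawn from the paper; all names are hypothetical) performs the same one-to-many broadcast in both styles: first with per-receiver channels, so the interaction is solely through messages, and then by storing the message in a shared variable that every process reads, with a mutex supplying the exclusion protocol.

    // Minimal sketch (hypothetical names) contrasting message-based and
    // shared-variable interaction for a one-to-many broadcast.
    package main

    import (
    	"fmt"
    	"sync"
    )

    // broadcastByMessages sends the message to each receiver on its own channel;
    // once sent, a message cannot be "unsent", so a receiver may act on it at once.
    func broadcastByMessages(msg string, receivers []chan string) {
    	for _, ch := range receivers {
    		ch <- msg
    	}
    }

    func main() {
    	const n = 3
    	var wg sync.WaitGroup

    	// Loosely-coupled version: interaction solely through messages.
    	receivers := make([]chan string, n)
    	for i := range receivers {
    		receivers[i] = make(chan string, 1)
    	}
    	broadcastByMessages("hello", receivers)
    	for i, ch := range receivers {
    		wg.Add(1)
    		go func(i int, ch chan string) {
    			defer wg.Done()
    			fmt.Printf("process %d received %q\n", i, <-ch)
    		}(i, ch)
    	}
    	wg.Wait()

    	// Tightly-coupled version: the message is stored in a shared variable
    	// read by every process; a mutex provides mutual exclusion.
    	var mu sync.Mutex
    	shared := ""
    	mu.Lock()
    	shared = "hello"
    	mu.Unlock()
    	for i := 0; i < n; i++ {
    		wg.Add(1)
    		go func(i int) {
    			defer wg.Done()
    			mu.Lock()
    			msg := shared
    			mu.Unlock()
    			fmt.Printf("process %d read %q\n", i, msg)
    		}(i)
    	}
    	wg.Wait()
    }

Note how the shared-variable version is the more succinct broadcast (one write serves all readers), but every access must follow the locking protocol to rule out races, whereas the message-passing version needs no such analysis.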