Some Computer Organizations and Their Effectiveness

A hierarchical model of computer organizations is developed, based on a tree model using request/service type resources as nodes. Two aspects of the model are distinguished: logical and physical. General parallel- or multiple-stream organizations are examined as to type and effectiveness, especially regarding intrinsic logical difficulties. The overlapped simplex processor (SISD) is limited by data dependencies. Branching has a particularly degenerative effect. The parallel processors [single-instruction stream, multiple-data stream (SIMD)] are analyzed. In particular, a nesting-type explanation is offered for Minsky's conjecture: that the performance of a parallel processor increases as log M rather than M (the number of data stream processors). Multiprocessors (MIMD) are subject to a saturation syndrome based on general communications lockout. Simplified queuing models indicate that saturation develops when the fraction of task time spent locked out (L/E) approaches 1/n, where n is the number of processors. Resource sharing in multiprocessors can be used to avoid several other classic organizational problems.
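
The saturation condition quoted above can be illustrated with a simple bottleneck approximation, sketched below under stated assumptions rather than as the paper's own queuing analysis: each of n processors alternates E time units of independent execution with L time units holding an exclusive lock on a shared resource, so the lock admits at most one access per L time units. The function name effective_speedup and the sample values of E and L are hypothetical.

# Illustrative sketch only: a first-order bottleneck model of multiprocessor
# lockout, not the simplified queuing model used in the paper.
def effective_speedup(n: int, L: float, E: float) -> float:
    """Useful-work rate of n processors, normalized to one uncontended processor."""
    ideal = n / (E + L)      # aggregate request rate if the lock were never contended
    lock_cap = 1.0 / L       # the serial lock admits at most 1/L requests per unit time
    return min(ideal, lock_cap) * (E + L)

if __name__ == "__main__":
    E, L = 1.0, 0.1          # lockout fraction L/E = 0.1 (assumed values)
    for n in (1, 2, 4, 8, 11, 16, 32):
        print(f"n={n:2d}  speedup={effective_speedup(n, L, E):5.2f}")

With L/E = 0.1 the printed speedups flatten near n = 11: additional processors stop contributing once the lockout fraction L/E approaches 1/n, which is the saturation condition stated in the abstract.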
