Hypercube algorithms are presented for distributed block-matrix operations. These algorithms are based entirely on an interconnection scheme which involves two orthogonal sets of binary trees. This switching topology makes use of all hypercube interconnection links in a synchronized manner.
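The construction of the two tree sets is not spelled out in this summary; the sketch below shows one standard way to embed a binomial spanning tree in a (sub)cube, rooted at node 0, where every tree edge flips a single address bit and is therefore a physical hypercube link. In the orthogonal-tree setting, one such tree can be built inside each "row" subcube (one half of the node address bits) and each "column" subcube (the other half), so the two tree sets use disjoint links. The bit split and rooting here are illustrative assumptions, not necessarily the authors' exact construction.

/* Illustrative sketch (assumed construction, not necessarily the paper's):
 * binomial spanning tree rooted at node 0 of a d-dimensional (sub)cube.
 * Each parent/child pair differs in exactly one address bit, i.e. each
 * tree edge is a hypercube link, so broadcasts down the tree and
 * reductions up the tree take d communication steps.                    */
#include <stdio.h>

/* Parent of node n in the binomial tree rooted at 0: clear the lowest set bit. */
int tree_parent(int n)
{
    return (n == 0) ? -1 : (n & (n - 1));
}

/* Children of node n: set any bit below n's lowest set bit (any of the d
 * bits if n is the root).  Writes child ids into child[] and returns the
 * number of children.                                                     */
int tree_children(int n, int d, int child[])
{
    int count = 0;
    for (int k = 0; k < d; k++) {
        int bit = 1 << k;
        if (n != 0 && bit >= (n & -n))   /* stop at n's lowest set bit */
            break;
        child[count++] = n | bit;
    }
    return count;
}

int main(void)
{
    int d = 3, child[8];
    for (int n = 0; n < (1 << d); n++) {
        int nc = tree_children(n, d, child);
        printf("node %d: parent %2d, children:", n, tree_parent(n));
        for (int c = 0; c < nc; c++)
            printf(" %d", child[c]);
        printf("\n");
    }
    return 0;
}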
An efficient, novel matrix-vector multiplication algorithm based on this technique is described. In addition, matrix transpose operations that move only pointers rather than actual data have been implemented for some applications by exploiting the same tree structures. For cases where a physical vector or matrix transpose is required, possible techniques, including extensions of the above scheme, are discussed.
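The "pointers rather than data" transpose is easy to picture with a small, purely local sketch: the matrix is carried around as a descriptor, and transposition merely swaps the roles of the indices in O(1). This illustrates only the general idea; the distributed version described above additionally relies on the tree structures, and the descriptor layout below is an assumption, not the paper's data structure.

/* Minimal sketch of a "logical" transpose: no elements move, only the
 * descriptor changes.  Row-major layout with a transposed flag is an
 * assumed representation for illustration.                              */
#include <stdio.h>

typedef struct {
    double *data;   /* row-major storage of the original matrix           */
    int     rows;   /* logical row count of the current view              */
    int     cols;   /* logical column count of the current view           */
    int     trans;  /* nonzero if the view is the transpose of the data   */
} MatView;

/* Transpose in O(1): swap the logical extents and flip the flag. */
void transpose(MatView *m)
{
    int t = m->rows; m->rows = m->cols; m->cols = t;
    m->trans = !m->trans;
}

/* Element (i, j) of the current view. */
double get(const MatView *m, int i, int j)
{
    return m->trans ? m->data[j * m->rows + i]   /* rows/cols already swapped */
                    : m->data[i * m->cols + j];
}

int main(void)
{
    double a[6] = {1, 2, 3, 4, 5, 6};   /* a 2 x 3 matrix                 */
    MatView m = { a, 2, 3, 0 };
    transpose(&m);                      /* now a 3 x 2 view, no copying   */
    printf("%g\n", get(&m, 2, 1));      /* prints 6, the original (1, 2)  */
    return 0;
}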
The algorithms support submatrix partitionings of the data rather than being limited to row and/or column partitionings. This allows efficient use of nodal vector processors as well as shorter interprocessor communication packets, and it produces a favorable data distribution for applications that involve near-neighbor operations, such as image processing. The algorithms are based on an interprocessor communication paradigm built on variable-length, tagged block data transfers. They have been implemented on an Intel iPSC hypercube system with the support of the Hypercube Library developed at the Christian Michelsen Institute.
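The message format used by the variable-length, tagged block transfers is not given in this summary. The header below is a hypothetical illustration of the kind of information such a transfer has to carry (a tag identifying the operation, the block coordinates, and a variable element count); the field names and layout are assumptions, not the format of the iPSC Hypercube Library.

/* Hypothetical header for a tagged, variable-length block transfer;
 * illustrative only, not the library's actual message format.           */
#include <stdio.h>
#include <stddef.h>

typedef struct {
    int tag;        /* operation / phase identifier (e.g. broadcast, reduce) */
    int src_node;   /* sending node id                                       */
    int block_row;  /* global block-row index of the submatrix carried       */
    int block_col;  /* global block-column index                             */
    int nelem;      /* number of double-precision elements that follow       */
} BlockHeader;

/* Total size of a message: the fixed header plus a variable-length payload. */
size_t message_bytes(const BlockHeader *h)
{
    return sizeof(*h) + (size_t)h->nelem * sizeof(double);
}

int main(void)
{
    BlockHeader h = { 1, 0, 2, 3, 128 };   /* e.g. a 128-element block from node 0 */
    printf("message size: %zu bytes\n", message_bytes(&h));
    return 0;
}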