The indirect k-ary n-cube for a vector processing environment

Abstract

In an earlier paper we introduced an indirect binary n-cube memory server network with adaptive properties that make it useful in a parallel vector processing environment. Owing to a special choice in the design of the basic switch node, the memory server network has the property that N vector processors issuing vector fetches with similar strides are forced into lock step after an initial startup investment. In this paper we extend this work to the indirect k-ary n-cube. Since this network has a more favorable memory latency scaling of log_k N, one expects the short vector performance to improve as k is increased for a given N. We find this to be the case. We also find that the cost of the memory server system scales in a manner that favors modest values of k greater than 2.
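As a rough illustration of the latency/cost trade-off the abstract refers to (a sketch under a simple crosspoint-counting assumption, not the paper's own cost model): an indirect k-ary n-cube with N = k^n endpoints has n = log_k N stages of N/k switches, and if each switch is taken to be a full k x k crossbar, the total crosspoint count is

% Stage count (latency) and illustrative crosspoint cost for an indirect k-ary n-cube
% with N = k^n endpoints; assumes each switch is a full k x k crossbar (k^2 crosspoints).
\[
  n = \log_k N, \qquad
  C(k) = \underbrace{\frac{N}{k}}_{\text{switches/stage}}
         \cdot \underbrace{\log_k N}_{\text{stages}}
         \cdot \underbrace{k^{2}}_{\text{crosspoints/switch}}
       = N \, k \, \frac{\ln N}{\ln k}.
\]
% Since k / ln k is minimized near k = e, C(k) is smallest for small k above 2,
% while the stage count log_k N keeps falling as k grows.

Under this assumption, for N = 4096 the stage count falls from 12 at k = 2 to 3 at k = 16, while C(k) grows from 24N at k = 2 to 48N at k = 16, which is consistent with the abstract's observation that modest values of k above 2 are preferred on cost grounds.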