Architectures systoliques pour le traitement du signal : bilan et perspectives

Résumé: The article briefly presents the evolution of systolic architectures through the example of the solution of a least-squares problem of particular importance in signal processing. We set out to show the advantages and the limits of the model, and we describe several architectural realizations. An annotated bibliography concludes the article.

Abstract: In this paper we briefly survey systolic algorithms and architectures through the example of the least-squares solution of a beamforming problem. We show the advantages and the limits of the systolic model of computation. We describe some existing systolic architectures. Finally, we provide the reader with an annotated list of references.
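
As background for the abstract above, a standard statement of the recursive least-squares beamforming problem (this formulation is conventional, not a restatement of the article's own derivation; reference [18] treats its systolic solution) reads, with assumed notation x(k) for the k-th array snapshot, y(k) for the reference sample, β for the forgetting factor, and w(n) for the weight vector:

  \min_{w(n)} \; \sum_{k=1}^{n} \beta^{\,n-k} \bigl| y(k) - x^{H}(k)\, w(n) \bigr|^{2}, \qquad 0 < \beta \le 1,

solved via a QR factorization of the exponentially weighted data matrix and back-substitution of the triangular system

  R(n)\, w(n) = u(n).

The triangular factor R(n) is updated by Givens rotations, which map one-to-one onto the cells of a triangular systolic array.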

[1] Jean-Marc Delosme, Algorithms for finite shift-rank processes, 1982.

[2] David W. L. Yen et al., Systolic Processing and an Implementation for Signal and Image Processing, 1982, IEEE Transactions on Computers.

[3] H. T. Kung et al., Fault-Tolerance and Two-Level Pipelining in VLSI Systolic Arrays, 1983.

[4] Yves Robert et al., Automata networks in computer science: theory and applications, 1987.

[5] Patrice Quinton et al., Systolic algorithms and architectures, 1987.

[6] Sun-Yuan Kung et al., VLSI design for massively parallel signal processors, 1983, Microprocessors and Microsystems.

[7] J. Litva et al., Application of the warp processor to adaptive beamforming, 1990.

[8] Dan I. Moldovan et al., Partitioning and Mapping Algorithms into Fixed Size Systolic Arrays, 1986, IEEE Transactions on Computers.

[9] Patrice Quinton et al., The systematic design of systolic arrays, 1987.

[10] H. T. Kung et al., Supporting systolic and memory communication in iWarp, 1990, ISCA '90.

[11] A. Haug et al., The Martin Marietta advanced systolic array processor, 1988, Proceedings of the 2nd Symposium on the Frontiers of Massively Parallel Computation.

[12] Jacob A. Abraham et al., Algorithm-Based Fault Tolerance for Matrix Operations, 1984, IEEE Transactions on Computers.

[13] H. T. Kung et al., The Design of Special-Purpose VLSI Chips, 1980, Computer.

[14] Hen-Geul Yeh, Kalman filtering and systolic processors, 1986, ICASSP '86, IEEE International Conference on Acoustics, Speech, and Signal Processing.

[15] J. A. Abraham et al., Fault-tolerant matrix arithmetic and signal processing on highly concurrent computing structures, 1986, Proceedings of the IEEE.

[16] Yves Robert et al., Spacetime-minimal systolic arrays for Gaussian elimination and the algebraic path problem, 1990, Parallel Computing.

[17] Jean-Marc Delosme et al., Highly concurrent computing structures for matrix arithmetic and signal processing, 1982, Computer.

[18] J. G. McWhirter et al., Recursive Least-Squares Minimization Using A Systolic Array, 1983, Optics & Photonics.

[19] E. L. Cloud et al., The geometric arithmetic parallel processor, 1988, Proceedings of the 2nd Symposium on the Frontiers of Massively Parallel Computation.

[20] Kai Hwang et al., Partitioned Matrix Algorithms for VLSI Arithmetic Systems, 1982, IEEE Transactions on Computers.

[21] Jean-Michel Muller et al., Some results about on-line computation of functions, 1989, Proceedings of the 9th Symposium on Computer Arithmetic.

[22] C. R. Ward et al., Practical realizations of parallel adaptive beamforming systems, 1990.

[23] Philippe Clauss et al., Calculus of space-optimal mappings of systolic algorithms on processor arrays, 1990, Journal of VLSI Signal Processing.

[24] H. T. Kung, Why systolic architectures?, 1982, Computer.

[25] H. T. Kung et al., Wafer-scale integration and two-level pipelined implementations of systolic arrays, 1984, Journal of Parallel and Distributed Computing.

[26] Peter R. Cappello et al., Completely-pipelined architectures for digital signal processing, 1983.

[27] T. Kailath et al., VLSI and Modern Signal Processing, 1984.

[28] H. T. Kung et al., The Warp Computer: Architecture, Implementation, and Performance, 1987, IEEE Transactions on Computers.

[29] H. T. Kung, Warp experience: we can map computations onto a parallel computer efficiently, 1988, ICS '88.