The "MIND" scalable PIM architecture

MIND (Memory, Intelligence, and Network Device) is an advanced parallel computer architecture for high-performance computing and scalable embedded processing. It is a processor-in-memory (PIM) architecture that integrates DRAM bit cells and CMOS logic devices on the same silicon die. MIND is multicore, with multiple memory/processor nodes on each chip, and supports global shared memory across systems of MIND components. MIND is distinguished from other PIM architectures in that it incorporates mechanisms for efficient support of a global parallel execution model based on the semantics of message-driven multithreaded split-transaction processing. MIND is designed to operate either in conjunction with conventional microprocessors or in standalone arrays of like devices. It also incorporates mechanisms for fault tolerance, real-time execution, and active power management. This paper describes the major elements and operational methods of the MIND architecture.
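The message-driven multithreaded model the abstract refers to can be pictured, very roughly, as parcels of work that travel to the memory node owning the target data, where each arriving parcel instantiates a short-lived thread. The following is a minimal toy sketch of that idea, not MIND's actual mechanism; the `Parcel`, `MemoryNode`, and `atomic_add` names and their interfaces are illustrative assumptions.

```python
import queue
import threading

class Parcel:
    """Hypothetical parcel: a message naming a target address and an action."""
    def __init__(self, addr, action, payload):
        self.addr = addr          # target address (selects the owning node)
        self.action = action      # work to perform at the destination
        self.payload = payload    # operands carried with the message

class MemoryNode:
    """Illustrative memory/processor node: runs parcels against local memory."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.memory = {}          # this node's fraction of the address space
        self.inbox = queue.Queue()

    def send(self, parcel):
        self.inbox.put(parcel)

    def run(self):
        # Each arriving parcel spawns a lightweight thread of work at the
        # data, rather than the requester blocking on a remote access.
        while True:
            parcel = self.inbox.get()
            if parcel is None:    # sentinel: shut the node down
                break
            t = threading.Thread(target=parcel.action, args=(self, parcel))
            t.start()
            t.join()              # serialize for simplicity in this sketch

def atomic_add(node, parcel):
    # Read-modify-write performed where the datum lives.
    node.memory[parcel.addr] = node.memory.get(parcel.addr, 0) + parcel.payload

node = MemoryNode(0)
worker = threading.Thread(target=node.run)
worker.start()
for v in (1, 2, 3):
    node.send(Parcel(addr=0x10, action=atomic_add, payload=v))
node.send(None)
worker.join()
print(node.memory[0x10])   # -> 6
```

The point of the sketch is the inversion of control: computation migrates to memory as messages, which is what lets a split transaction complete without the requester stalling on round-trip latency.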
