Parallel Systems from 1979 to 2014: 35 Years of Progress?
Summary form only given. In 1979 I started working in a world where semiconductor technology was advancing rapidly. The world was expecting that very shortly there would be a chip available which could store 64 kilobits (65,536 bits) of data, and microprocessor-based computers were already available at prices individuals could afford. The semiconductor industry saw great potential in building programmable systems. Most semiconductor companies took their lead from the mainstream computer industry and addressed the integration of conventional processors. The company I joined, Inmos, took a different approach. Inmos believed that a new programmable device, the transputer, could become a building block for electronic systems. A transputer would include a processor, memory and a communication system, allowing many transputers to be used together in a programmable parallel system. In 1984, after five years of development, Inmos launched the first transputer product, together with the occam programming language. Occam addressed the (often ignored) problem of how to program a parallel system. In 2014 it would be possible to integrate about 10,000 transputers into a single chip, but the electronics industry has not progressed in this way, and massively parallel systems are still not standard practice. In this talk I start by looking at the basics of building parallel systems, at Tony Hoare's Communicating Sequential Processes and the occam language, and at the Inmos transputer. I compare the simplicity and low cost of the transputer with the complexity and cost of some component parts of a modern processor. I also look at the reasons why the industry has developed ever more powerful uniprocessors rather than parallel processors. I then turn to the state of computing in 2014 and to the challenges we face: the end of Dennard scaling, the slowing of Moore's law, and the pressure to reduce power consumption. I make the case that the adoption of new computer architectures based on large-scale parallelism will enable us to progress past these problems. Finally, I speculate on what a new parallel architecture might look like, and in which applications it might first be used.
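As a rough illustration of the CSP/occam programming model the abstract refers to, the sketch below uses Go, whose goroutines and channels descend directly from Hoare's CSP. It is not occam or transputer code, and all names in it are invented for the example; it only shows the idea of independent processes that share nothing and interact solely by sending messages over channels.

```go
package main

import "fmt"

// square is one process in a two-stage pipeline: it reads integers from 'in',
// squares them, and sends the results on 'out'. Like an occam process placed
// on its own transputer, it communicates only through channels.
func square(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * v
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)

	// Run the worker stage concurrently (cf. occam's PAR).
	go square(in, out)

	// A second concurrent process feeds the pipeline, then closes its channel.
	go func() {
		for i := 1; i <= 5; i++ {
			in <- i
		}
		close(in)
	}()

	// The main process collects the results over the output channel.
	for v := range out {
		fmt.Println(v)
	}
}
```

The point of the example is the structure, not the arithmetic: each stage could in principle run on a separate processor, and the program's meaning does not change, which is the property occam and the transputer were designed to exploit.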