Artificial Intelligence (AI), in its current form, is a curiously incomplete technology. In certain essential respects it falls far short of human abilities. Some of these limitations may be the result of the limited processing power available on our current serial computers. A number of research groups are exploring parallel processing as a way of speeding up the execution of existing AI applications. A few researchers are exploring the possibility that some kind of massively parallel architecture might make possible a real breakthrough in the way that AI problems are approached.

AI programs are different in structure from the numerical programs that have received the most attention from the parallel processing community, and they present a different set of problems to anyone trying to apply parallel processing to them. Most numerical programs do a lot of processing on a small amount of data; they typically spend most of their time in a few easily identifiable inner loops. Because of their emphasis on symbolic, knowledge-based processing, AI programs typically must sift through vast amounts of stored information or vast numbers of possible solutions, but very little work is done in processing each item, usually just a comparison or two.

We can divide the parallel approaches to AI into three broad categories, though the boundaries between them are often fuzzy: the general programming approach, applications of parallelism to the processing of specialized programming languages, and massively parallel active memory systems. The general programming approach attempts to detect and exploit any opportunities for concurrent execution that may exist in free-form heuristic AI programs written in some general-purpose language such as LISP.
In some of these systems, the programmer is expected to indicate where parallel processing is to occur; in others, the programmer pays no attention to issues of parallelism, and it is the system which must decide where parallelism is appropriate. In either case, the programmer is free to structure his program however he chooses, so it is possible to use search-guiding heuristics of arbitrary complexity. A critical question is just how much concurrency of execution is possible in such a system, since free-form code often contains serial dependencies among the various search paths and heuristics.
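As a sketch of the first style, programmer-indicated parallelism (not code from any of the systems discussed): Multilisp-style futures can be approximated with Python's standard `concurrent.futures`. The knowledge base, the `matches` predicate, and the chunking scheme here are all hypothetical; the point is that the programmer explicitly marks where concurrent scanning may occur, while the per-item work stays tiny, just a comparison or two, as described above.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical knowledge base: many items, trivial work per item,
# mirroring the AI workload character described in the text.
KNOWLEDGE = [("bird", "can-fly"), ("penguin", "cannot-fly"), ("robin", "can-fly")]

def matches(item, query):
    # Cheap per-item test: a single comparison.
    return item[1] == query

def parallel_retrieve(query, chunks=2):
    # Programmer-indicated parallelism: each slice of the knowledge
    # base is scanned in its own task (a "future"); the results are
    # then collected serially, in order.
    def scan(part):
        return [item for item in part if matches(item, query)]
    size = max(1, len(KNOWLEDGE) // chunks)
    parts = [KNOWLEDGE[i:i + size] for i in range(0, len(KNOWLEDGE), size)]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(scan, part) for part in parts]
        return [hit for f in futures for hit in f.result()]

print(parallel_retrieve("can-fly"))
```

The concurrency available is bounded by exactly the serial dependencies the text raises: if a heuristic must see the result of one scan before launching the next, the futures collapse back into sequential execution.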