In our quest to build intelligent machines, we have but one naturally occurring model: the human brain. It follows that one natural idea for artificial intelligence (AI) is to simulate the functioning of the brain directly on a computer. Indeed, the idea of building an intelligent machine out of artificial neurons has been around for quite some time. Some early results on brain-like mechanisms were achieved by [18], and other researchers pursued this notion through the next two decades, e.g., [1, 4, 19, 21, 24]. Research in neural networks came to a virtual halt in the 1970s, however, when the networks under study were shown to be very weak computationally. Recently, there has been a resurgence of interest in neural networks. There are several reasons for this, including the appearance of faster digital computers on which to simulate larger networks, interest in building massively parallel computers, and most importantly, the discovery of powerful network learning algorithms.
The new neural network architectures have been dubbed connectionist architectures. For the most part, these architectures are not meant to duplicate the operation of the human brain, but rather receive inspiration from known facts about how the brain works. They are characterized by
Large numbers of very simple neuron-like processing elements;
Large numbers of weighted connections between the elements—the weights on the connections encode the knowledge of a network;
Highly parallel, distributed control; and
Emphasis on learning internal representations automatically.
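The first two characteristics above can be made concrete with a minimal sketch of a single neuron-like processing element. The function and weight values here are illustrative inventions, not taken from any particular connectionist model: the unit computes a weighted sum of its inputs and fires when the sum exceeds a threshold, so all of its "knowledge" resides in the weights.

```python
# A minimal neuron-like processing element: it computes a weighted sum
# of its inputs and fires (outputs 1) when the sum exceeds a threshold.
# The knowledge of the unit lives entirely in its weights.

def unit_output(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0

# Example: a two-input unit whose weights happen to encode logical AND.
weights = [0.6, 0.6]
threshold = 1.0
print(unit_output([1, 1], weights, threshold))  # fires: 1
print(unit_output([1, 0], weights, threshold))  # does not fire: 0
```

A real connectionist network would wire large numbers of such units together and adjust the weights by a learning algorithm rather than setting them by hand.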
Connectionist researchers conjecture that thinking about computation in terms of the brain metaphor rather than the digital computer metaphor will lead to insights into the nature of intelligent behavior.
Computers are capable of amazing feats. They can effortlessly store vast quantities of information. Their circuits operate in nanoseconds. They can perform extensive arithmetic calculations without error. Humans cannot approach these capabilities. On the other hand, humans routinely perform simple tasks such as walking, talking, and commonsense reasoning. Current AI systems cannot match human performance at any of these tasks. Why not? Perhaps the structure of the brain is somehow suited to these tasks, and not suited to tasks like high-speed arithmetic calculation. Working under constraints suggested by the brain may make traditional computation more difficult, but it may lead to solutions to AI problems that would otherwise be overlooked.
What constraints, then, does the brain offer us? First of all, individual neurons are extremely slow devices when compared to their counterparts in digital computers. Neurons operate in the millisecond range, an eternity to a VLSI designer. Yet, humans can perform extremely complex tasks, like interpreting a visual scene or understanding a sentence, in just a tenth of a second. In other words, we do in about a hundred steps what current computers cannot do in ten million steps. How can this be possible? Unlike a conventional computer, the brain contains a huge number of processing elements that act in parallel. This suggests that in our search for solutions, we look for massively parallel algorithms that require no more than 100 processing steps [9].
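The timing argument above reduces to simple arithmetic, restated here as a sketch. The millisecond and tenth-of-a-second figures come from the text; the variable names are invented for the example.

```python
# Neurons operate in roughly the millisecond range, yet tasks like
# interpreting a visual scene take about a tenth of a second. Any
# strictly sequential chain of neural operations must therefore fit
# in roughly 100 steps -- the "100-step" constraint.

neuron_step_ms = 1       # ~1 ms per neural operation (from the text)
task_time_ms = 100       # ~0.1 s to interpret a scene or sentence

max_sequential_steps = task_time_ms // neuron_step_ms
print(max_sequential_steps)  # -> 100
```

The conclusion is not that the brain performs only 100 operations, but that its computation must be at most about 100 operations *deep*, with massive parallelism supplying the rest.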
Also, neurons are failure-prone devices. They are constantly dying (you have certainly lost a few since you began reading this article), and their firing patterns are irregular. Components in digital computers, on the other hand, must operate perfectly. Why? Such components store bits of information that are available nowhere else in the computer: the failure of one component means a loss of information. What if we built AI programs that were not sensitive to the failure of a few components, perhaps by using redundancy and distributing information across a wide range of components? This would open the possibility of very large-scale implementations. With current technology, it is far easier to build a billion-component integrated circuit in which 95 percent of the components work correctly than it is to build a perfectly functioning million-component machine [8].
Another thing people seem to be able to do better than computers is handle fuzzy situations. We have very large memories of visual, auditory, and problem-solving episodes, and one key operation in solving new problems is finding closest matches to old situations. Inexact matching is something brain-style models seem to be good at, because of the diffuse and fluid way in which knowledge is represented.
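A crude version of "finding closest matches to old situations" is nearest-neighbor retrieval over stored patterns. The episode vectors and the use of Hamming distance below are only illustrative stand-ins for the diffuse, distributed representations the text describes, but they show how an inexact cue can still retrieve the best old match.

```python
# Retrieve the stored pattern closest to a noisy probe, measured by
# Hamming distance (number of mismatched positions). Even an inexact
# cue finds the best-matching old episode.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

memory = {
    "episode_a": [1, 0, 1, 1, 0, 0, 1, 0],
    "episode_b": [0, 1, 0, 0, 1, 1, 0, 1],
    "episode_c": [1, 1, 1, 0, 0, 1, 0, 0],
}

probe = [1, 0, 1, 0, 0, 0, 1, 0]  # episode_a with one bit flipped

best = min(memory, key=lambda name: hamming(memory[name], probe))
print(best)  # -> episode_a
```

In connectionist models this retrieval falls out of the network dynamics rather than an explicit search, which is one reason such models handle fuzzy situations gracefully.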
The idea behind connectionism, then, is that we may see significant advances in AI if we approach problems from the point of view of brain-style computation rather than rule-based symbol manipulation. At the end of this article, we will look more closely at the relationship between connectionist and symbolic AI.
[1] C. D. Gelatt, et al. Optimization by Simulated Annealing. Science, 1983.
[2] Richard Lippmann, et al. Review of Neural Networks for Speech Recognition. Neural Computation, 1989.
[3] Dana H. Ballard, et al. Parameter Nets. Artif. Intell., 1984.
[4] Frank Rosenblatt, et al. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. 1963.
[5] H. D. Block. The Perceptron: A Model for Brain Functioning. I. 1962.
[6] Terrence J. Sejnowski, et al. A Parallel Network that Learns to Play Backgammon. Artif. Intell., 1989.
[7] Jerome A. Feldman, et al. Connectionist Models and Their Properties. Cogn. Sci., 1982.
[8] Geoffrey E. Hinton, et al. Mundane Reasoning by Parallel Constraint Satisfaction. 1990.
[9] Dean Pomerleau, et al. ALVINN: An Autonomous Land Vehicle in a Neural Network. 1989.
[10] Geoffrey E. Hinton, et al. Learning and Relearning in Boltzmann Machines. 1986.
[11] Terrence J. Sejnowski, et al. Analysis of Hidden Units in a Layered Network Trained to Classify Sonar Targets. Neural Networks, 1988.
[12] J. J. Hopfield, et al. Neural Networks and Physical Systems with Emergent Collective Computational Abilities. Proceedings of the National Academy of Sciences of the United States of America, 1982.
[13] W. Ashby. Design for a Brain. 1954.
[14] Michael I. Jordan. Supervised Learning and Systems with Excess Degrees of Freedom. 1988.
[15] Geoffrey E. Hinton, et al. Connectionist Architectures for Artificial Intelligence. Computer, 1990.
[16] Terrence J. Sejnowski, et al. Parallel Networks that Learn to Pronounce English Text. Complex Syst., 1987.
[17] Geoffrey E. Hinton, et al. A Distributed Connectionist Production System. Cogn. Sci., 1988.