Because learning methods such as the backpropagation algorithm and self-organizing maps, the standard techniques for generalization, are notoriously slow, fast pattern-learning techniques are becoming increasingly necessary. Standard artificial neural networks can serve as models of biological memory embodied as strongly connected, layered networks of processing units. These feedback (Hopfield with delta learning) and feedforward (backpropagation) networks learn patterns slowly: the network must adjust the weights on the links between the input and output layers (see Figure 1) until it obtains the correct response to the training patterns. But biological learning is not a single process: some forms are very quick and others relatively slow. Short-term biological memory, in particular, works very quickly, so slow neural network models are not plausible candidates in this case.

Over the past few years, my colleagues and I have developed new neural network designs that model working memory in their ability to learn and generalize instantaneously [1-3]; the code sketch below contrasts the two training regimes. These networks are almost as good as backpropagation in the quality of their generalization [4]. With their speed advantage, they will work in many real-time signal-processing, data-compression, forecasting, and pattern-recognition applications. In this report, I describe the networks and their applications to two problems: time-series prediction and an intelligent Web metasearch engine design. My descriptions should indicate how these designs could work in other situations.

To provide a context for examining instantaneous learning, let's first consider different types of biological memory. Although described separately, the different memory types appear to be fundamentally interrelated, and their classification can take a variety of forms [5].

First, many kinds of sensory memory systems help us perceive the world. For example, visual memory includes components that let a memory trace persist for about one-tenth of a second. This persistence lets us see continuous motion in the discrete frames of a television broadcast. Another component of this memory, more sensitive to shape than brightness, integrates information arriving from the two retinas. Like visual persistence, a memory related to auditory persistence creates an echo that lingers after an item has been spoken. That is why we remember the later words in a series better if we hear them rather than read them.

There are also memories for facts, events, skills, and habits; some are based on language, others are not. Fact and event memory is distinct from the memory that forms the basis of skills and habits: declarative (explicit) memory refers to facts and events. …
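To make the contrast concrete, here is a minimal sketch in Python with NumPy of the two training regimes. The `train_delta` loop is a standard delta-rule (perceptron-style) update; `train_cc` follows the corner-classification idea in simplified form, assuming binary inputs and a single binary output, so its exact weight and bias formulas should be read as illustrative assumptions rather than as the precise published algorithm. All function and variable names are mine.

```python
import numpy as np

# Slow regime: delta-rule learning. The weights on the links between input
# and output are nudged, presentation after presentation, until the network
# responds correctly to every training pattern.
def train_delta(X, y, lr=0.1, epochs=100):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            out = float(x @ w + b > 0)
            err = t - out
            w += lr * err * x      # small correction per pattern presentation
            b += lr * err
    return w, b

# Fast regime: a simplified corner-classification-style network. Each binary
# training vector is mapped directly onto its own hidden unit, so "training"
# is a single pass over the data with no iteration at all.
def train_cc(X, y, r=0):
    W = 2 * X - 1                  # +1 where an input bit is 1, -1 where it is 0
    b = r + 1 - X.sum(axis=1)      # each unit fires within Hamming radius r
    v = 2 * y - 1                  # output weights copied straight from targets
    return W, b, v

def predict_cc(W, b, v, x):
    hidden = (W @ x + b > 0).astype(float)
    return float(hidden @ v > 0)

if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

    # The one-pass network handles XOR, which no single-layer delta-rule
    # network can represent.
    W, b, v = train_cc(X, np.array([0, 1, 1, 0], dtype=float))
    print([predict_cc(W, b, v, x) for x in X])    # -> [0.0, 1.0, 1.0, 0.0]

    # The iterative network needs many passes even for the separable AND task.
    w, b2 = train_delta(X, np.array([0, 0, 0, 1], dtype=float))
    print([float(x @ w + b2 > 0) for x in X])     # -> [0.0, 0.0, 0.0, 1.0]
```

With the radius of generalization r set above zero, a hidden unit also responds to inputs within Hamming distance r of its stored pattern, which is how this family of networks generalizes beyond the exact training set.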
[1] S. Kak et al., "The Three Languages of the Brain: Quantum, Reorganizational, and Associative," 1996.
[2] S. Kak et al., "New algorithms for training feedforward neural networks," Pattern Recognition Letters, 1994.
[3] S. C. Kak et al., "On Generalization by Neural Networks," Information Sciences, 1998.
[4] S. Kak et al., "On training feedforward neural networks," 1993.
[5] P. Raina, "Comparison of Learning and Generalization Capabilities of the KAK and the Backpropagation Algorithms," Information Sciences, 1994.
[6] S. C. Kak et al., "A Neural Network-based Intelligent Metasearch Engine," Information Sciences, 1999.
[7] L. Cosmides, in The Cognitive Neurosciences, 1995.
[8] A. Baddeley et al., "Working memory and executive control," Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 1996.
[9] K.-W. Tang et al., "A new corner classification approach to neural network training," 1998.
[10] C. L. Giles et al., "Searching the World Wide Web," Science, 1998.
[11] C. L. Giles et al., "Accessibility of information on the web," Nature, 1999.