Fodor and Lepore (1992) recognize the state-space kinematics of neural networks for what it most assuredly is: a natural home for holistic accounts of meaning and of cognitive significance generally. Precisely what form such accounts should take is still to be worked out, but Fodor and Lepore (hereafter, "F&L") see some early possibilities well enough to try for a preemptive attack on the entire approach. My aim here is to show that the state-space approach is both more resilient and more resourceful than their critique would suggest.

A typical neural network (see Fig. 1) consists in a population of input or "sensory" neurons {I1,..., In}, which project their axons forward to one or more populations of hidden or "processing" neurons {H1,..., Hm} and {G1,..., Gj}, which in turn project their axons forward to a final population of output or "motor" neurons {O1,..., Ok}. The network's occurrent representations consist in the several activation patterns across each of these distinct neuronal populations. For example, the network's input representation at any given moment will consist in some ordered set of activation levels across the input units {I1,..., In}. It is this particular pattern or vector, qua unique combination of values along each of the n axes, that carries the relevant information, that has the relevant "semantic content." A parallel point holds for each of the network's representations at each of the successive neuronal layers. The point of the sequence of distinct layers is to permit the transformation of input representations into a sequence of subsequent representations, and ultimately into an output vector that drives a motor response of some kind. This transformational task is carried out at each stage by the configuration of synaptic "weights" that connect each layer of neurons to the next layer up.
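To make the layered transformation concrete, the following is a minimal sketch, in Python with NumPy, of the kind of feedforward pass just described: an input activation vector is transformed by successive weight matrices, each result squashed by a nonlinearity, until an output ("motor") vector emerges. The sigmoid squashing function, the random weights, and the particular layer sizes are illustrative assumptions only, not details of any specific network discussed in the text.

```python
import numpy as np

def sigmoid(z):
    # Squashing function: maps a unit's summed input to an activation level in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(input_vector, weight_matrices):
    """Propagate an activation vector through successive neuronal layers.

    Each weight matrix encodes the synaptic connections from one layer to
    the next layer up; applying it (plus the squashing function) transforms
    one layer's activation pattern into the next layer's pattern.
    """
    activation = np.asarray(input_vector, dtype=float)
    for W in weight_matrices:
        activation = sigmoid(W @ activation)
    return activation

# Illustrative dimensions only: n input ("sensory") units, two hidden
# ("processing") layers of m and j units, and k output ("motor") units.
n, m, j, k = 4, 5, 3, 2
rng = np.random.default_rng(0)
weights = [rng.normal(size=(m, n)),   # input layer {I} -> hidden layer {H}
           rng.normal(size=(j, m)),   # hidden layer {H} -> hidden layer {G}
           rng.normal(size=(k, j))]   # hidden layer {G} -> output layer {O}

input_pattern = rng.uniform(size=n)            # an activation vector across the input units
output_pattern = feedforward(input_pattern, weights)
print(output_pattern)                          # the resulting output ("motor") vector
```

On this picture, training a network consists in adjusting the entries of these weight matrices until the input-to-output transformation is the desired one; the "semantic content" of any occurrent representation is carried by the activation vector itself, as a point in the relevant layer's state space.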
[1] Fodor, J. A., and E. Lepore. Holism: A Shopper's Guide. 1992.
[2] Lehky, S. R., and T. J. Sejnowski. "Network model of shape-from-shading: neural function arises from both receptive and projective fields." Nature, 1988.
[3] Gorman, R. P., and T. J. Sejnowski. "Analysis of hidden units in a layered network trained to classify sonar targets." Neural Networks, 1988.
[4] Sejnowski, T. J., and C. R. Rosenberg. "Parallel Networks that Learn to Pronounce English Text." Complex Systems, 1987.
[5] Churchland, P. M. Scientific Realism and the Plasticity of Mind. 1980.