It is more than half a century since Donald Hebb published his classic book on the organization of behavior (Hebb 1949). Though highly readable, it has been cited in the computational/theoretical neuroscience literature far more often than it has been read. Why is this? On p. 62 of the book one finds the now-famous neurophysiological postulate: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." This simple and self-explanatory hypothesis for synaptic "learning" also aroused considerable debate, because the mechanism was local. "Locality" is the appealing idea that a synaptic efficacy is determined by the information available to the pre- and postsynaptic neurons, both in space and in time, and by nothing else.

One may then wonder, of course, where the above "metabolic change" might take place. Hebb continued directly by suggesting that "synaptic knobs develop", and on p. 65 he states very explicitly: "I have chosen to assume that the growth of synaptic knobs, with or without neurobiotaxis, is the basis of the change of facilitation from one cell on another, and this is not altogether implausible." No, it is not. We now even perceive the assertion as commonplace, though in Hebb's time it was a breakthrough.

Since its original formulation, given in plain English and in no other form, the central question implied by Hebb's postulate has been how to implement it. Most of the information presented to a neuronal network varies in both space and time, and thus requires a common representation of the spatial and the temporal aspects of the input. As neuronal activity changes, the responding system should be able to measure and, if necessary, store this change. How can it do so?
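In its simplest rate-based reading, Hebb's postulate says that an efficacy grows when pre- and postsynaptic activity coincide, using only locally available signals. A minimal sketch, with an illustrative learning rate and activity values that are not taken from the text:

```python
# Minimal rate-based sketch of Hebb's postulate: the efficacy w of a
# single synapse increases in proportion to the coincidence of pre-
# and postsynaptic activity. All parameter values are illustrative.
ETA = 0.1  # learning rate (assumed)

def hebb_step(w, pre_rate, post_rate):
    """One local update of a single synaptic efficacy w.

    Only the pre- and postsynaptic activities enter the rule;
    no global signal is used (the 'locality' property).
    """
    return w + ETA * pre_rate * post_rate

# Coincident activity strengthens the synapse; silent presynaptic
# input leaves it unchanged.
w = hebb_step(0.5, pre_rate=1.0, post_rate=0.8)   # 0.5 -> 0.58
w_silent = hebb_step(0.5, pre_rate=0.0, post_rate=0.8)  # stays 0.5
```

Note that this rate-based form carries no notion of temporal order; that refinement is exactly what the spike-timing formulation below adds.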
By now we know that the Hebb rule works by means of a learning window, in the context of spike-timing-dependent synaptic plasticity. Through the learning window, a temporal mechanism "looks" at the arrival time of a presynaptic spike in relation to the postsynaptic firing time. If a spike arrives at, for example, an excitatory synapse not too long before the postsynaptic neuron fires, the synapse strengthens; if, on the other hand, the spike arrives "too late", the connection is weakened. Locality in space was a hypothesis clearly formulated …
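The learning window described above can be sketched as an additive spike-timing-dependent rule with a double-exponential window, a standard choice in the modeling literature. The amplitudes and time constants below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Sketch of an STDP learning window (assumed double-exponential form;
# amplitudes and time constants are illustrative, not from the text).
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # window time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair.

    dt = t_post - t_pre > 0: the presynaptic spike arrives before the
    postsynaptic one ('takes part in firing it') -> potentiation.
    dt < 0: the spike arrives 'too late' -> depression.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    if dt < 0:
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

# Pre 5 ms before post: strengthening; 5 ms after: weakening.
dw_before = stdp_dw(t_pre=10.0, t_post=15.0)  # > 0
dw_after = stdp_dw(t_pre=15.0, t_post=10.0)   # < 0
```

The exponential decay captures the idea that only spike pairings within a window of a few tens of milliseconds influence the synapse at all, which is how the rule remains local in time.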