A proposed name for aperiodic brain activity: stochastic chaos

Even casual inspection of time series derived by sampling and recording the fields of electroencephalographic (EEG) and magnetoencephalographic (MEG) potential generated by active brains reveals continuous, widespread oscillations. These waves suggest the overlap of multiple rhythms embedded in broad-spectrum noise. In dynamical terms they might be ascribed to limit cycle attractors, because spectral analysis of short segments reveals peaks in the classical frequency ranges of the alpha (8-12 Hz), theta (3-7 Hz), beta (13-30 Hz) and gamma (30-100 Hz) bands of the EEG and MEG. However, autocorrelation functions go rapidly to zero, and the basic form to which spectra converge, as the duration of segments chosen for analysis increases, is a linear decrease in log power with increasing log frequency at a slope near 2 ("1/f²"). This form is consistent with Brownian motion and telegraph noise. The unpredictability of brain oscillations suggests that EEGs and MEGs manifest multiple limit cycle attractors made time-varying by continuous modulation, or multiple chaotic attractors with repetitive state transitions, or time-varying colored noise, or all of the above.

In all likelihood these fields of potential are epiphenomenal, perhaps equivalent to the sounds of internal combustion engines at work, or to antique computers in science fiction movies, or to the roars of crowds at football games. In fact, most neuroscientists reject EEG and MEG evidence, in the beliefs that the real work of brains is done by action potentials in neural networks, and that recording wave activity is equivalent to observing an engine with a stethoscope or a computer with a D'Arsonval galvanometer. However, one can learn a lot about a system by listening and watching, if one knows what to seek and find.
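The "1/f²" form described above can be illustrated with a minimal numerical sketch, not drawn from the original text: Brownian motion, i.e. the running sum of white noise, has a power spectrum that falls off at a log-log slope near 2 at low frequencies. The function name and all parameters below are illustrative choices; the sketch averages periodograms over several independent random walks and fits the slope of log power against log frequency.

```python
import numpy as np

rng = np.random.default_rng(42)

def brownian_spectral_slope(n_segments=32, n=4096, f_max=0.1):
    """Fit the log-log spectral slope of averaged random-walk periodograms.

    Restricting the fit to low frequencies (f < f_max) avoids the
    flattening of the random-walk spectrum near the Nyquist frequency.
    """
    psd = np.zeros(n // 2 + 1)
    for _ in range(n_segments):
        walk = np.cumsum(rng.standard_normal(n))   # Brownian-motion surrogate
        psd += np.abs(np.fft.rfft(walk)) ** 2      # unnormalized periodogram
    psd /= n_segments
    f = np.fft.rfftfreq(n)                         # cycles per sample
    keep = (f > 0) & (f < f_max)                   # drop DC and high frequencies
    slope, _intercept = np.polyfit(np.log(f[keep]), np.log(psd[keep]), 1)
    return slope

slope = brownian_spectral_slope()                  # expected near -2
```

Telegraph noise yields a similar low-frequency roll-off (a Lorentzian spectrum falls at slope 2 above its corner frequency), which is why both processes are consistent with the observed spectral form.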
Numerous recent studies of the behavioral correlates of so-called "unit activity" of single neurons in sensory and motor systems have shown that the carrier of behaviorally significant information is not the pulse train of the single neuron, but instead the organized activity of arrays of neurons (see review in Note 3.7 in Freeman 1995). How many neurons are needed to make an array? Does the number exceed the number that can be accessed by current methods of recording pulse trains (on the order of 100)? Where do they form, what fractions of neurons in local neighborhoods suffice, and how are their outputs selectively read by their targets of transmission? In my view these questions have no answers, because the objects of their inquiry do not exist. Brains work with large masses of neurons having low shared variance, on the order of 0.1%, not with selected small numbers in networks with high covariance. It is the techniques of unit analysis that give a distorted view of brain function.

The neural network concept is classically derived from the Golgi studies of cerebral cortical neurons by Lorente de Nó (1934), who provided the anatomical basis for the concepts of computational neural nets (McCulloch 1969), programmable computers (von Neumann 1958), and nerve cell assemblies (Hebb 1949). The problem is that, when properly used, the Golgi technique stains less than 1% of the neurons in sections of cortex. Moreover, unit recording isolates the pulses generated by the local axons of only a small fraction of neurons near the electrode tip, and extracellular recording is seldom designed to observe the dendritic field potentials.