If we compute faster, do we understand better?

Practitioners of cognitive science, “theoretical” neuroscience, and psychology have made less use of high-performance computing for testing theories than have those in many other areas of science. Why is this? In high-performance scientific computation, potentially billions of operations must lead to a trustable conclusion. Technical problems with the stability of algorithms aside, this requirement also places extremely rigorous constraints on the accuracy of the underlying theory. For example, electromagnetic interactions seem to hold accurately from atomic to galactic scales, so large-scale computations built on elementary principles are both possible and useful. Many have commented that the behavioral and neural sciences are largely pretheoretical. One consequence is that we cannot trust our few theories to scale well, for a very good reason: they don’t. We have some quite good computational theories for single neurons, and some large-scale aspects of behavior seem to be surprisingly lawful. However, we have little idea how to go from the behavior of a single neuron to the behavior of the 10^11 neurons involved when the brain actually does something. Neural networks have offered one potential way to leap this enormous gap in scale, since many elementary units cooperate in a neural network computation. As currently formulated, however, neural networks seem to lack mechanisms essential for flexible control of the computation, and they also neglect structure at intermediate scales of organization. We will present some speculations related to controllability and scaling in neural networks.
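To make the scale gap concrete, the sketch below simulates a leaky integrate-and-fire unit, one of the simple single-neuron models of the kind the passage alludes to. The model choice, parameter values, and the function name simulate_lif are illustrative assumptions rather than anything specified here. A single such unit simulates in microseconds; nothing in the sketch, however, says how 10^11 of them should be organized or controlled, which is precisely the gap the argument points to.

```python
# Minimal sketch (assumed, illustrative) of a leaky integrate-and-fire neuron:
# an example of a "quite good computational theory for single neurons."
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, resistance=1e7):
    """Integrate dV/dt = (-(V - v_rest) + R*I) / tau with threshold-and-reset.

    input_current: injected current in amps, one value per time step of dt seconds.
    Returns the membrane-potential trace (volts) and a list of spike times (seconds).
    """
    v = v_rest
    trace, spikes = [], []
    for step, current in enumerate(input_current):
        # Euler step toward the input-driven equilibrium, with leak back to rest.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_thresh:              # threshold crossed: record a spike, reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

if __name__ == "__main__":
    # 200 ms of constant 2 nA input drives a regular spike train.
    current = np.full(2000, 2e-9)
    trace, spikes = simulate_lif(current)
    print(f"{len(spikes)} spikes in 200 ms; first at {spikes[0] * 1000:.1f} ms")
```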