Factorized Computation: What the Neocortex Can Tell Us About the Future of Computing

In ancient Greece, the brain was thought to serve mainly to cool the body. Once humanity came to understand that the brain is the organ of thought, its workings were explained by analogy to hydraulic pump systems, among the most sophisticated technologies of the time. In the nineteenth century, when we began to harness electricity, it became apparent that the brain, too, uses electrical signals. Then, in the twentieth century, we formalized the notion of an algorithm, advanced electrical engineering, and invented the computer. These inventions remain among the most common comparisons for how our brains might work.

When we step back and compare what we know from electrophysiology, anatomy, psychology, and medicine with current computational models of the neocortex, it becomes apparent that our traditional definition of an algorithm, and of what it means to “compute”, needs to be adjusted to apply to the neocortex. More specifically, the traditional conversion from “input” to “output” is not well defined when different brain areas represent different aspects of the same scene. Consider, for example, reading this paper: while the input is quite clearly visual, it is not obvious what the desired output is, besides perhaps turning to the next page, and that can hardly be the goal in itself. The more interesting aspect is instead the change of state in different areas of the brain and the corresponding changes in the states of their neurons. There are many types of models that place the interaction of modules at their center. Among those are:

[1] D. J. Felleman et al., “Distributed hierarchical processing in the primate cerebral cortex,” Cerebral Cortex, 1991.

[2] B. J. Frey et al., “Factor graphs and the sum-product algorithm,” IEEE Trans. Inf. Theory, 2001.

[3] G. E. Hinton et al., “Learning and relearning in Boltzmann machines,” 1986.

[4] J. L. McClelland et al., “Parallel distributed processing: explorations in the microstructure of cognition, vol. 1: foundations,” 1986.

[5] A. J. Viterbi et al., “Error bounds for convolutional codes and an asymptotically optimum decoding algorithm,” IEEE Trans. Inf. Theory, 1967.

[6] I. Wegener et al., “The complexity of Boolean functions,” 1987.

[7] R. Barták et al., “Constraint Processing,” Encyclopedia of Artificial Intelligence, 2009.

[8] M. Cook et al., “Learning and Inferring Relations in Cortical Networks,” arXiv, 2016.

[9] G. Schöner et al., “The Emergence of Stimulus-Response Associations from Neural Activation Fields: Dynamic Field Theory,” 2005.

[10] H. Bethe, “Statistical Theory of Superlattices,” 1935.

[11] E. Shapiro et al., “The family of concurrent logic programming languages,” ACM Computing Surveys, 1989.

[12] J. Pearl et al., “Probabilistic reasoning in intelligent systems: networks of plausible inference,” Morgan Kaufmann Series in Representation and Reasoning, 1991.

[13] D. Knill et al., “The Bayesian brain: the role of uncertainty in neural coding and computation,” Trends in Neurosciences, 2004.

[14] M. Cook et al., “Toward joint approximate inference of visual quantities on cellular processor arrays,” IEEE International Symposium on Circuits and Systems (ISCAS), 2015.
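To make the idea of computation through interacting modules concrete, here is a minimal sketch of sum-product message passing [2] on a two-variable factor graph. The variable names and factor values are invented for illustration; the point is only that each factor acts as a local module exchanging messages with its neighbors, rather than mapping a global input to a global output.

```python
import numpy as np

# Tiny factor graph over two binary variables x1, x2:
#   f1(x1) -- x1 -- f2(x1, x2) -- x2
f1 = np.array([0.7, 0.3])            # unary factor on x1
f2 = np.array([[0.9, 0.1],           # pairwise factor, indexed f2[x1, x2]
               [0.2, 0.8]])

# Sum-product message passing: each factor summarizes its local
# knowledge into a message for its neighbor.
msg_f1_to_x1 = f1                    # factor-to-variable message
msg_x1_to_f2 = msg_f1_to_x1          # x1 has degree 2, so it just forwards
msg_f2_to_x2 = msg_x1_to_f2 @ f2     # sum over x1 of f2[x1, x2] * msg[x1]

marginal_x2 = msg_f2_to_x2 / msg_f2_to_x2.sum()

# Brute-force check: enumerate the joint distribution and marginalize.
joint = f1[:, None] * f2
brute = joint.sum(axis=0) / joint.sum()
assert np.allclose(marginal_x2, brute)
print(marginal_x2)  # → [0.69 0.31]
```

On a tree-structured graph this local exchange computes exact marginals; on graphs with cycles the same updates, iterated, give the approximate "loopy" inference that several of the models above rely on.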