Brains are undoubtedly high dimensional: they are composed of hundreds of neurons in the simplest animals and of nearly 100 billion neurons in the human brain [1]. Furthermore, information processing in the nervous system takes place at multiple scales, from the molecular level in subcellular and synaptic processes, to the network level in small, large and system-wide circuits, and even in the realm of brain-to-brain communication. Many informational interactions therefore involve the simultaneous joint action of distinct temporal and spatial scales, which adds to the complexity derived from such high dimensionality.

In their review [2], Gorban et al. strike a balance between the curse and the blessing of dimensionality and gather a set of theoretical results of interest to both the machine learning and neuroscience research communities. I would like to emphasize the importance of a key issue discussed by Gorban et al. Both in the brain and in machine learning paradigms, re-training large ensembles of neurons is extremely time and energy consuming, and in fact impossible in many real-life situations and applications. The existence of highly discriminative units and of a hierarchical organization for error correction is therefore fundamental for effective information encoding, processing and execution, and it is also relevant for fast learning and for optimizing memory capacity. The results on concentration of measure and stochastic separation discussed by Gorban et al. come at the right time for joint efforts in this direction by groups with diverse backgrounds.

This paper is also a timely reminder that computational neuroscience and artificial intelligence have common multidisciplinary roots. In 1943, Warren S. McCulloch, a neurophysiologist, and Walter H. Pitts, a self-taught logician, created the first artificial neural network from the neuroscience knowledge of that time [3]. Just a few years later, Donald O. Hebb, a psychologist, suggested what is considered the first learning rule for neuron ensembles [4]. This learning rule was mathematically formalized and successfully applied to a wide variety of artificial neural network paradigms that emerged from the early work of McCulloch and Pitts. However, after these bio-inspired beginnings, there was a long period in which artificial intelligence progressed apart from neuroscience research, and vice versa. Only a few theoretical paradigms kept some bidirectional communication between machine learning and neuroscience, see [5], with isolated efforts that related machine learning understanding with wet-lab experimental neuroscience.
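To make the stochastic separation idea concrete, the sketch below (a hypothetical toy example, not part of the original commentary or of Gorban et al.'s formal results) draws i.i.d. points from a high-dimensional cube and checks how often one randomly chosen point can be cut off from all the others by a single Fisher-type linear discriminant; the dimension, sample size and number of trials are illustrative assumptions only.

```python
# Toy illustration (assumed parameters) of stochastic separation:
# in high dimension, one random point is almost always linearly
# separable from all the others by a single linear unit.
import numpy as np

rng = np.random.default_rng(0)
dim, n_points, n_trials = 200, 10_000, 100   # illustrative values only

separable = 0
for _ in range(n_trials):
    # i.i.d. sample from the uniform distribution on [-1, 1]^dim,
    # centred so the discriminant reduces to an inner product test
    x = rng.uniform(-1.0, 1.0, size=(n_points, dim))
    x -= x.mean(axis=0)
    target = x[0]                      # the point to single out
    rest = x[1:]
    # project every other point onto the target direction; separation
    # holds if no projection reaches the target's own squared norm
    proj = rest @ target
    if proj.max() < target @ target:
        separable += 1

print(f"separated in {separable}/{n_trials} trials (dim={dim}, n={n_points})")
```

With settings in this regime the test typically succeeds in essentially every trial, which is the concentration-of-measure effect that makes single highly discriminative units, and cheap one-shot corrections by such units, plausible in both brains and large trained networks.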
[1] W. S. McCulloch, et al. A logical calculus of the ideas immanent in nervous activity. The Philosophy of Artificial Intelligence, 1990.
[2] Konrad P. Kording, et al. Towards an integration of deep learning and neuroscience. bioRxiv, 2016.
[3] S. Herculano-Houzel. The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost. Proceedings of the National Academy of Sciences, 2012.
[4] Ramón Huerta, et al. Learning classification in the olfactory system of insects. Neural Computation, 2004.
[5] Christof Koch, et al. Systematic generation of biophysically detailed models for diverse cortical neuron types. Nature Communications, 2018.
[6] E. Capaldi, et al. The organization of behavior. Journal of Applied Behavior Analysis, 1992.
[7] Ivan Tyukin, et al. The unreasonable effectiveness of small neural ensembles in high-dimensional brain. Physics of Life Reviews, 2018.
[8] Pablo Varona, et al. Discrete sequential information coding: heteroclinic cognitive dynamics. Frontiers in Computational Neuroscience, 2018.
[9] Stephen Grossberg, et al. Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world. Neural Networks, 2013.
[10] Dileep George, et al. Towards a mathematical theory of cortical micro-circuits. PLoS Computational Biology, 2009.
[11] Konrad P. Körding, et al. Toward an integration of deep learning and neuroscience. bioRxiv, 2016.