High and low dimensionality in neuroscience and artificial intelligence: Comment on "The unreasonable effectiveness of small neural ensembles in high-dimensional brain" by A.N. Gorban et al.

Brains are undoubtedly high dimensional: they comprise hundreds of neurons in the simplest animals, all the way up to nearly 100 billion neurons in the human brain [1]. Furthermore, information processing in the nervous system takes place at multiple scales: from the molecular level in subcellular and synaptic processes, to the network level in small, large and system-wide circuits, including the realm of brain-to-brain communication. Thus, many informational interactions involve the simultaneous joint action of distinct temporal and spatial scales, which adds to the complexity derived from such high dimensionality.

In their review [2], Gorban et al. strike a balance between the curse and the blessing of dimensionality and gather a set of theoretical results of interest to both the machine learning and neuroscience research communities. I would like to emphasize the importance of a key issue discussed by Gorban et al.: both in the brain and in machine learning paradigms, re-training large ensembles of neurons is extremely time- and energy-consuming, and in fact impossible to realize in many real-life situations and applications. Thus, the existence of highly discriminative units and of a hierarchical organization for error correction is fundamental for effective information encoding, processing and execution; it is also relevant for fast learning and for optimizing memory capacity. The results on concentration of measure and stochastic separation discussed by Gorban et al. arrive at a timely moment for joint efforts in this direction by groups with diverse backgrounds.

This paper is also a noteworthy reminder that computational neuroscience and artificial intelligence have common multidisciplinary roots. In 1943, Warren S. McCulloch, a neurophysiologist, and Walter H. Pitts, a self-taught logician, created the first artificial neural network from the neuroscience knowledge of that time [3]. Just a few years later, Donald O. Hebb, a psychologist, suggested what is considered the first learning rule for neuron ensembles [4]. This learning rule was mathematically formalized and successfully applied to a wide variety of artificial neural network paradigms that emerged from the early work of McCulloch and Pitts. However, after these bio-inspired beginnings, there was a long period in which artificial intelligence advanced apart from progress in neuroscience research, and vice versa. Only a few theoretical paradigms kept some bidirectional communication between machine learning and neuroscience, see [5], with isolated efforts that related machine learning understanding with wet