Integrating convolutional neural networks into a sparse distributed representation model based on mammalian cortical learning

Biological brains exhibit a remarkable capacity to recognise real-world patterns effectively. Despite major advances in neuroscience over the last few decades, the brain's underlying mechanisms for pattern recognition remain poorly understood. Efforts to replicate such high-level brain functions from the limited low-level details known about the brain have naturally rested on critical assumptions that make brain-inspired machine learning possible. Convolutional neural networks are an example of such architectures, and have been shown to achieve state-of-the-art classification performance in practical applications. The Hierarchical Temporal Memory (HTM) model, on the other hand, performs pattern and sequence recognition using a highly biologically plausible structure and mode of operation. In this work we build on the strengths of convolutional neural networks by integrating them into the HTM framework. An analysis of the common and complementary features of the two models leads to the proposal of a novel, hybrid machine learning architecture. Practical tests on a handwritten digit recognition task reveal a 2% drop in recognition accuracy compared with the original convolutional neural network design. Nevertheless, key HTM features embedded in the new architecture open the way to its future enhancement with sequence learning and prediction, a capability absent from traditional convolutional neural networks.