Brain experiments imply adaptation mechanisms which outperform common AI learning algorithms

In an attempt to imitate the brain's functionality, researchers have bridged neuroscience and artificial intelligence for decades; however, experimental neuroscience has not directly advanced the field of machine learning (ML). Here, using neuronal cultures, we demonstrate that increased training frequency accelerates the neuronal adaptation processes. We implemented this mechanism in artificial neural networks, where a local learning step size increases for coherent consecutive learning steps, and tested it on MNIST, a simple dataset of handwritten digits. In on-line learning with only a few handwriting examples, the success rates of the brain-inspired algorithms substantially outperform those of commonly used ML algorithms. We speculate that this emerging bridge from slow brain function to ML will promote ultrafast decision making under limited examples, which is the reality in many aspects of human activity, robotic control, and network optimization.
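The abstract does not give the algorithm itself, but the described mechanism (a per-weight step size that grows while consecutive learning steps remain coherent) can be sketched as follows. This is a minimal illustration, assuming "coherent consecutive learning steps" means agreement in sign between successive gradients, in the spirit of sign-based adaptive schemes such as Rprop; it is not the authors' exact implementation.

```python
import numpy as np

def adaptive_sign_update(w, grad, prev_grad, lr,
                         boost=1.2, decay=0.5, lr_min=1e-6, lr_max=1.0):
    """One gradient step with per-weight step sizes: each step size
    grows when the current and previous gradients agree in sign
    (a coherent consecutive step) and shrinks when they disagree.
    The boost/decay factors and clipping bounds are illustrative."""
    coherent = np.sign(grad) == np.sign(prev_grad)
    lr = np.where(coherent, lr * boost, lr * decay)
    lr = np.clip(lr, lr_min, lr_max)      # keep step sizes in a sane range
    w = w - lr * grad
    return w, lr

# Usage on a toy quadratic loss L(w) = 0.5 * w**2, whose gradient is w:
w = np.array([5.0, -3.0])
lr = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(50):
    grad = w                              # gradient of the toy loss
    w, lr = adaptive_sign_update(w, grad, prev_grad, lr)
    prev_grad = grad
# w converges toward the minimum at 0; the coherent descent direction
# lets the step sizes grow, accelerating convergence.
```

Because the gradient sign stays fixed along a smooth descent path, the step sizes compound upward until the minimum is reached, which is one plausible reading of how faster adaptation could help in the few-example regime the abstract targets.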
