Controllability, Multiplexing, and Transfer Learning in Networks using Evolutionary Learning

Networks are fundamental building blocks for representing data and computations. Remarkable progress has recently been achieved in learning within structurally defined (shallow or deep) networks. Here we introduce an evolutionary exploratory search and learning method for topologically flexible networks under the constraint of producing elementary computational steady-state input-output operations. Our results include: (1) the identification of networks, spanning four orders of magnitude in size, that implement steady-state input-output functions such as a band-pass filter, a threshold function, and an inverse band-pass function. (2) The learned networks are controllable in the technical sense that only a small number of driver nodes are required to move the system to a new state; furthermore, the fraction of required driver nodes remains constant during evolutionary learning, suggesting a stable system design. (3) Our framework allows multiplexing of different computations using the same network; for example, using a binary representation of the inputs, the network can readily compute three different input-output functions. (4) The proposed evolutionary learning demonstrates transfer learning: if the system has learned one function A, then learning a second function B requires on average fewer steps than learning B from tabula rasa. We conclude that constrained evolutionary learning produces large, robust, controllable circuits capable of multiplexing and transfer learning. Our study suggests that network-based computation of steady-state functions, representing either cellular modules of cell-to-cell communication networks or internal molecular circuits communicating within a cell, could be a powerful model for biologically inspired computing, complementing conceptualizations such as attractor-based models and reservoir computing.
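To make the setup concrete, the following is a minimal Python sketch of this style of constrained evolutionary learning: a small recurrent network is relaxed to a steady state for each scalar input, and a simple (mu + lambda) genetic loop mutates the weight matrix toward a band-pass target. The network size, dynamics, input/output node choices, and mutation settings are illustrative assumptions, not the exact procedure of the paper.

```python
import numpy as np

# Minimal sketch (assumed settings, not the authors' exact method):
# evolve a recurrent network whose steady-state output approximates
# a band-pass function of a scalar input u.

N = 10                                   # number of nodes (assumption)
rng = np.random.default_rng(0)

def steady_state_output(W, u, steps=200, dt=0.1):
    """Relax dx/dt = -x + tanh(W x + u e_0) and read the last node."""
    x = np.zeros(N)
    drive = np.zeros(N)
    drive[0] = u                         # input enters node 0 (assumption)
    for _ in range(steps):
        x = x + dt * (-x + np.tanh(W @ x + drive))
    return x[-1]                         # output read from last node (assumption)

def band_pass(u, lo=0.3, hi=0.7):
    """Target steady-state input-output function: high inside [lo, hi]."""
    return 1.0 if lo <= u <= hi else 0.0

inputs = np.linspace(0.0, 1.0, 21)
targets = np.array([band_pass(u) for u in inputs])

def fitness(W):
    outputs = np.array([steady_state_output(W, u) for u in inputs])
    return -np.mean((outputs - targets) ** 2)    # negative MSE: higher is better

# Simple (5 + 15) evolutionary loop with Gaussian weight mutation.
population = [rng.normal(0.0, 1.0, (N, N)) for _ in range(20)]
for generation in range(100):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:5]
    children = [p + rng.normal(0.0, 0.1, (N, N)) for p in parents for _ in range(3)]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
```

In this sketch the "constraint" of the abstract appears as the fitness function: only the steady-state input-output map is scored, leaving the topology free to drift under mutation.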

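Result (2) concerns controllability in the driver-node sense. Assuming the standard structural-controllability criterion of Liu et al. (Nature, 2011), where the number of driver nodes is N_D = max(N - |M*|, 1) for a maximum matching M* of the network's bipartite representation, a sketch of how the driver-node fraction could be tracked for a learned network follows. The toy edge list and the networkx-based implementation are illustrative assumptions.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Sketch: count driver nodes of a directed network via maximum matching
# on its bipartite representation (out-copies vs. in-copies of each node).
# N_D = max(N - |M*|, 1). Toy edge list below, not a network from the paper.

def num_driver_nodes(edges, n):
    B = nx.Graph()
    outs = [("out", i) for i in range(n)]
    ins = [("in", j) for j in range(n)]
    B.add_nodes_from(outs, bipartite=0)
    B.add_nodes_from(ins, bipartite=1)
    B.add_edges_from((("out", i), ("in", j)) for i, j in edges)
    matching = bipartite.maximum_matching(B, top_nodes=set(outs))
    matched_edges = sum(1 for node in matching if node in set(outs))
    return max(n - matched_edges, 1)

edges = [(0, 1), (1, 2), (2, 3), (3, 1)]            # toy directed network
print("driver nodes:", num_driver_nodes(edges, 4))  # -> 1
```

Tracking num_driver_nodes(...) / N across generations of the evolutionary loop is one way to observe the constant driver-node fraction reported in the abstract.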