Estimation and learning of Dynamic Nonlinear Networks (DyNNets)

Learning high-dimensional systems from data is often computationally challenging in the presence of nonlinearities and dynamics. This paper proposes a novel approach to identifying high-dimensional systems by decomposing them into networks of low-dimensional linear dynamical subsystems coupled through memoryless, scalar nonlinear feedback elements and memoryless, linear interactions. The proposed model class, called Dynamic Nonlinear Networks (DyNNets), encompasses a wide range of complex phenomena and is particularly well suited to modeling neuronal systems. It is shown that the posterior density of the hidden states, given the unknown parameters of a DyNNet, admits a factorable structure that separates the linear dynamics, the memoryless nonlinearities, and the linear interactions. This factorization enables efficient maximum a posteriori (MAP) state estimation and system identification via the alternating direction method of multipliers (ADMM). The methodology is illustrated on the estimation of neural mass models.
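The abstract describes two computational ingredients: a structured state-space model and an ADMM-based estimator. As a rough illustration of the first, the sketch below simulates a hypothetical network in the spirit of the abstract's decomposition. The signal flow assumed here (per-subsystem linear state-space dynamics (A_i, B_i, C_i), a memoryless scalar nonlinearity phi, and a linear interaction matrix W mixing the scalar subsystem outputs) is one reading of the abstract, not the paper's exact DyNNet definition, and the name simulate_dynnet is invented for this example.

    import numpy as np

    def simulate_dynnet(A, B, C, W, phi, x0, T, noise_std=0.0, rng=None):
        """Simulate T steps of N coupled low-dimensional linear subsystems.

        A, B, C   : per-subsystem matrices of shapes (n_i, n_i), (n_i, 1), (1, n_i)
        W         : (N, N) matrix of memoryless linear interactions between outputs
        phi       : memoryless scalar nonlinearity, applied elementwise
        x0        : list of N initial state vectors
        noise_std : standard deviation of additive Gaussian process noise
        """
        if rng is None:
            rng = np.random.default_rng()
        N = len(A)
        x = [np.asarray(x0[i], dtype=float).copy() for i in range(N)]
        Y = np.zeros((T, N))
        for t in range(T):
            y = np.array([(C[i] @ x[i]).item() for i in range(N)])  # scalar outputs
            Y[t] = y
            u = phi(W @ y)  # linear interaction followed by the scalar nonlinearity
            for i in range(N):
                noise = noise_std * rng.standard_normal(x[i].shape)
                x[i] = A[i] @ x[i] + B[i].ravel() * u[i] + noise  # linear dynamics
        return Y

    # Example: three stable 2-state subsystems coupled through tanh feedback.
    rng = np.random.default_rng(0)
    N, n = 3, 2
    A = [0.9 * np.eye(n) for _ in range(N)]
    B = [rng.standard_normal((n, 1)) for _ in range(N)]
    C = [rng.standard_normal((1, n)) for _ in range(N)]
    W = 0.2 * rng.standard_normal((N, N))
    x0 = [rng.standard_normal(n) for _ in range(N)]
    Y = simulate_dynnet(A, B, C, W, np.tanh, x0, T=200, noise_std=0.01, rng=rng)

For the second ingredient, the abstract states that the factorable posterior makes MAP estimation amenable to ADMM. The paper's exact splitting is not reproduced here; the following is only the standard scaled-form ADMM template for minimizing f(x) + g(z) subject to x = z, with prox_f and prox_g as hypothetical stand-ins. In the DyNNet setting one would expect the f-update to be a quadratic (Kalman-smoother-like) solve over the linear dynamics and the g-update to decompose into cheap scalar proximal steps for the memoryless nonlinearities, which is what makes the factorization computationally attractive.

    def admm_map(prox_f, prox_g, dim, rho=1.0, iters=200):
        """Scaled-form ADMM for min_x f(x) + g(x) via the splitting x = z."""
        x = np.zeros(dim)
        z = np.zeros(dim)
        u = np.zeros(dim)  # scaled dual variable for the constraint x = z
        for _ in range(iters):
            x = prox_f(z - u, rho)  # e.g., quadratic solve over the linear dynamics
            z = prox_g(x + u, rho)  # e.g., separable scalar proximal updates
            u = u + x - z           # dual ascent on the consensus constraint
        return z

    # Smoke test with toy proximal operators: f(x) = 0.5*||x - a||^2, g(x) = lam*||x||_1.
    a, lam = np.array([2.0, -0.5, 0.1]), 0.4
    prox_f = lambda v, rho: (a + rho * v) / (1.0 + rho)
    prox_g = lambda v, rho: np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    x_hat = admm_map(prox_f, prox_g, dim=3)  # converges to a soft-thresholded a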
