Measuring Shared Information and Coordinated Activity in Neuronal Networks

Most nervous systems encode information about stimuli in the responses of large neuronal networks, and this activity often manifests as dynamically coordinated sequences of action potentials. Since multi-electrode recordings are now a standard tool in neuroscience research, it is important to have a measure of such network-wide behavioral coordination and information sharing, applicable to multivariate spike-train data. We propose a new statistic, informational coherence, which measures how much better one unit can be predicted by knowing the dynamical state of another. We argue that informational coherence is a measure of association and shared information superior to traditional pairwise measures of synchronization and correlation. To find the dynamical states, we use a recently introduced algorithm that reconstructs effective state spaces from stochastic time series. We then extend the pairwise measure to a multivariate analysis of the network by estimating the network multi-information. We illustrate the method by testing it on a detailed model of the transition from gamma to beta rhythms.
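The abstract does not spell out the statistic, so the following is only a minimal sketch in Python. It assumes informational coherence is the mutual information between two units' inferred state sequences, normalized by the smaller of the two marginal entropies, and takes the multi-information in its standard form (sum of marginal entropies minus the joint entropy). The function names, the normalization, and the plug-in (empirical-frequency) estimation are illustrative assumptions, not details taken from the text; in practice the state sequences would come from the state-space reconstruction algorithm mentioned above.

    import numpy as np

    def plug_in_entropy(samples):
        """Plug-in Shannon entropy (bits) of a sample of discrete
        symbols: a 1-D array of labels, or a 2-D array whose rows
        are joint symbols."""
        _, counts = np.unique(samples, return_counts=True, axis=0)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def informational_coherence(x, y):
        """Hypothetical estimator: mutual information between the
        state sequences x and y, normalized by min(H[X], H[Y]) so
        the result lies in [0, 1].  The normalization is an
        assumption about the statistic, not given in the abstract."""
        x, y = np.asarray(x), np.asarray(y)
        hx = plug_in_entropy(x)
        hy = plug_in_entropy(y)
        hxy = plug_in_entropy(np.column_stack([x, y]))
        mi = hx + hy - hxy          # I[X; Y]
        m = min(hx, hy)
        return mi / m if m > 0 else 0.0

    def network_multi_information(states):
        """Plug-in estimate of the multi-information of a list of
        state sequences: sum_i H[X_i] - H[X_1, ..., X_n].  Direct
        joint estimation is feasible only for a handful of units;
        larger networks need a tractable approximation, which this
        sketch does not attempt."""
        X = np.column_stack([np.asarray(s) for s in states])  # T x n
        marginals = sum(plug_in_entropy(X[:, i]) for i in range(X.shape[1]))
        return marginals - plug_in_entropy(X)

For example, informational_coherence([0, 1, 1, 0, 1], [0, 1, 1, 0, 0]) returns a value near 1 when one sequence is highly predictable from the other, and 0 when the inferred states are independent.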
