Parameter Estimation for Spatio-Temporal Maximum Entropy Distributions: Application to Neural Spike Trains