Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception

The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory, and recent experimental and behavioral studies have shown that animals’ performance in many tasks is consistent with such a Bayesian statistical interpretation. However, it remains unclear how a nonlinear Bayesian filter can be efficiently implemented in a network of neurons that satisfies minimal constraints of biological plausibility. Here, we propose the Neural Particle Filter (NPF), a sampling-based nonlinear Bayesian filter that does not rely on importance weights. We show that this filter can be interpreted as the neuronal dynamics of a recurrently connected rate-based neural network receiving feed-forward input from sensory neurons. Further, it captures properties of temporal and multi-sensory integration that are crucial for perception, and it allows for online parameter learning with a maximum-likelihood approach. The NPF holds the promise of avoiding the ‘curse of dimensionality’, and we demonstrate numerically that it can outperform weighted particle filters in higher dimensions and when the number of particles is limited.
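To make the idea of a sampling-based filter without importance weights concrete, the following is a minimal, self-contained sketch in Python. The model (the nonlinearities f and g, the noise levels, and all variable names) is an illustrative assumption, not the paper's setup, and the gain K is computed here from the empirical particle statistics in the spirit of an ensemble Kalman-Bucy filter rather than learned online by maximum likelihood as proposed for the NPF. Each particle follows the prior dynamics plus an observation-driven correction, and the posterior estimate is a plain, equally weighted particle average.

# Minimal sketch of an unweighted particle filter in the spirit of the NPF.
# All model choices below are illustrative assumptions; the gain is set from
# empirical particle statistics (ensemble-Kalman-style) instead of being
# learned online via maximum likelihood as in the paper.

import numpy as np

rng = np.random.default_rng(0)

# --- assumed model ---------------------------------------------------------
dt      = 1e-3        # Euler-Maruyama time step
T       = 10.0        # total simulation time
sigma_x = 0.5         # hidden-state noise amplitude
sigma_y = 0.2         # observation noise amplitude
n_part  = 100         # number of (unweighted) particles

f = lambda x: -x + np.sin(2.0 * x)   # hidden dynamics: dx = f(x) dt + sigma_x dW
g = lambda x: np.tanh(x)             # observations:    dy = g(x) dt + sigma_y dV

# --- simulate hidden state, observations, and the filter --------------------
x = 0.0                                  # true hidden state
z = rng.normal(0.0, 1.0, size=n_part)    # particle ensemble, no importance weights
steps = int(T / dt)
estimate = np.empty(steps)

for t in range(steps):
    # ground truth and the noisy observation increment it generates
    x += f(x) * dt + sigma_x * np.sqrt(dt) * rng.normal()
    dy = g(x) * dt + sigma_y * np.sqrt(dt) * rng.normal()

    # innovation: each particle compares the observation increment
    # with its own predicted observation
    innovation = dy - g(z) * dt

    # gain from the empirical cross-covariance between particles and their
    # predicted observations (assumption; the NPF adapts this gain online)
    K = np.cov(z, g(z))[0, 1] / sigma_y**2

    # unweighted particle update: prior drift + diffusion + observation correction
    z += f(z) * dt + sigma_x * np.sqrt(dt) * rng.normal(size=n_part) + K * innovation

    # posterior mean estimate is an equally weighted particle average
    estimate[t] = z.mean()

print("final true state: %.3f, filter estimate: %.3f" % (x, estimate[-1]))

Because every particle keeps unit weight, there is no resampling step and no weight degeneracy, which is the property behind the scaling advantage in higher dimensions claimed in the abstract.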
