Hierarchical Bayesian inference for concurrent model fitting and comparison for group studies

Computational modeling plays an important role in modern neuroscience research. Much previous work has relied on separate statistical methods to address two problems that are actually interdependent. First, given a particular computational model, Bayesian hierarchical techniques have been used to estimate individual variation in parameters across a population of subjects, leveraging population-level distributions. Second, candidate models are themselves compared, and individual variation in the expressed model is estimated, according to the fits of the models to each subject. The interdependence between these two problems arises because the relevant population for estimating the parameters of a model depends on which other subjects express that model. Here, we propose a hierarchical Bayesian inference (HBI) framework for concurrent model comparison, parameter estimation and inference at the population level, combining previous approaches. We show, both theoretically and experimentally, that this framework has important advantages for parameter estimation and model comparison. Parameters estimated by HBI show smaller errors than those obtained with other methods. Model comparison by HBI is robust against outliers and is not biased towards overly simplistic models. Furthermore, the fully Bayesian approach of HBI enables researchers to quantify uncertainty in group parameter estimates for each candidate model separately and to perform statistical tests on the parameters of a population.
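
To make the notion of concurrent fitting and comparison concrete, the sketch below illustrates the core idea as an expectation-maximization over a mixture of hierarchical models: each subject carries a posterior probability of expressing each candidate model (a responsibility), subject-level parameters are shrunk towards group-level distributions, and each model's group-level distribution is updated using only the subjects inferred to express that model. The toy setup is entirely an assumption made for illustration (two Gaussian models, a constant offset versus a linear trend, with known unit observation noise so that subject-level evidence is analytic); it is not the paper's HBI algorithm, which handles arbitrary subject-level likelihoods, but it shows why the two problems are interdependent.

```python
# Minimal sketch of the core idea, NOT the paper's algorithm: concurrent
# hierarchical parameter estimation and model comparison as an EM over a
# mixture of hierarchical models. Two toy Gaussian models with known unit
# observation noise are assumed so that each subject's model evidence is
# available in closed form.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
N, T = 40, 20                            # subjects, trials per subject
x = np.linspace(-1.0, 1.0, T)            # regressor for the "trend" model

# One design vector per candidate model: constant offset vs. linear trend.
designs = [np.ones(T), x]

# Simulate: half of the subjects express model 0, the other half model 1.
true_z = np.repeat([0, 1], N // 2)
theta = np.where(true_z == 0, rng.normal(0.8, 0.3, N), rng.normal(-0.5, 0.4, N))
Y = np.stack([theta[n] * designs[true_z[n]] + rng.normal(size=T) for n in range(N)])

# Group-level quantities per model: mean mu, variance tau2, frequency pi.
mu, tau2, pi = np.zeros(2), np.ones(2), np.full(2, 0.5)

for _ in range(50):                                       # EM iterations
    log_joint = np.zeros((N, 2))
    post_m, post_v = np.zeros((N, 2)), np.zeros((N, 2))   # subject-level posteriors
    for k, d in enumerate(designs):
        # Evidence of each subject's data under model k (subject parameter integrated out).
        cov = np.eye(T) + tau2[k] * np.outer(d, d)
        log_joint[:, k] = np.log(pi[k]) + multivariate_normal.logpdf(Y, mean=mu[k] * d, cov=cov)
        # Conjugate posterior over each subject's parameter under model k.
        prec = 1.0 / tau2[k] + d @ d
        post_v[:, k] = 1.0 / prec
        post_m[:, k] = (mu[k] / tau2[k] + Y @ d) / prec
    # E-step: responsibilities = per-subject posterior model probabilities.
    r = np.exp(log_joint - log_joint.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted updates of group mean, variance and frequency,
    # so each model's group statistics are informed only by subjects that express it.
    nk = r.sum(axis=0)
    mu = (r * post_m).sum(axis=0) / nk
    tau2 = (r * ((post_m - mu) ** 2 + post_v)).sum(axis=0) / nk
    pi = nk / N

print("estimated model frequencies:", np.round(pi, 2))
print("estimated group means:", np.round(mu, 2), "(simulated: 0.8 and -0.5)")
```

The design choice mirrors the abstract: because the M-step weights each subject by its responsibility, a subject that clearly expresses model 1 contributes almost nothing to model 0's group mean and variance, so each model's population estimates come from the subjects that actually express it.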
