Recommendations for Bayesian hierarchical model specifications for case-control studies in mental health

Hierarchical model fitting has become commonplace for case-control studies of cognition and behaviour in mental health. However, these techniques require us to formalise assumptions about the data-generating process at the group level, which may not be known. Specifically, researchers typically must choose whether to assume that all subjects are drawn from a common population, or to model them as deriving from separate populations. These assumptions have profound implications for computational psychiatry, as they affect the resulting inference (latent parameter recovery) and may conflate or mask true group-level differences. To test these assumptions, we ran systematic simulations on synthetic multi-group behavioural data from a commonly used multi-armed bandit (reinforcement learning) task. We then examined recovery of group differences in latent parameter space under the two commonly used generative modelling assumptions: (1) modelling groups under a common shared group-level prior (assuming all participants are generated from a common distribution, and are likely to share common characteristics); (2) modelling separate groups based on symptomatology or diagnostic labels, resulting in separate group-level priors. We evaluated the robustness of these approaches to variations in data quality and prior specifications across a variety of metrics. We found that fitting groups separately (assumption 2) provided the most accurate and robust inference across all conditions. Our results suggest that when dealing with data from multiple clinical groups, researchers should analyse patient and control groups separately, as this provides the most accurate and robust recovery of the parameters of interest.
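The data-generating process described above can be sketched as follows: each simulated participant is a Rescorla-Wagner Q-learner on a two-armed bandit, with individual parameters drawn from group-level distributions. This is a minimal illustrative sketch, not the paper's actual simulation code; the hyperparameter values, group sizes, and reward probabilities below are assumptions chosen only to show the structure of assumption (2), where each group has its own group-level prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_agent(alpha, beta, reward_probs, n_trials=200):
    """Simulate a Rescorla-Wagner Q-learner on a multi-armed bandit.

    alpha : learning rate in [0, 1]
    beta  : softmax inverse temperature
    reward_probs : per-arm reward probabilities
    Returns (choices, rewards) arrays of length n_trials.
    """
    n_arms = len(reward_probs)
    Q = np.zeros(n_arms)
    choices = np.empty(n_trials, dtype=int)
    rewards = np.empty(n_trials)
    for t in range(n_trials):
        # Softmax choice rule (stabilised by subtracting the max)
        logits = beta * Q
        p = np.exp(logits - logits.max())
        p /= p.sum()
        c = rng.choice(n_arms, p=p)
        r = float(rng.random() < reward_probs[c])
        # Prediction-error update of the chosen arm's value
        Q[c] += alpha * (r - Q[c])
        choices[t], rewards[t] = c, r
    return choices, rewards

# Assumption (2): separate group-level priors. Each group's learning
# rates are drawn from its own Beta distribution (values illustrative).
controls = [simulate_agent(rng.beta(8, 2), 5.0, [0.7, 0.3]) for _ in range(20)]
patients = [simulate_agent(rng.beta(2, 8), 5.0, [0.7, 0.3]) for _ in range(20)]
```

Under assumption (1), both groups' learning rates would instead be drawn from a single shared distribution (e.g. one `rng.beta` call covering all 40 participants), and the question the paper asks is how these two generative choices affect recovery of the true group difference.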
