Ten simple rules for the computational modeling of behavioral data
[1] Michael J. Frank,et al. Chunking as a rational strategy for lossy data compression in visual working memory tasks , 2017 .
[2] G. Schwarz. Estimating the Dimension of a Model , 1978 .
[3] Nathaniel D. Daw,et al. Trial-by-trial data analysis using computational models , 2011 .
[4] Jeffrey N. Rouder,et al. Modeling Response Times for Two-Choice Decisions , 1998 .
[5] E. Wagenmakers,et al. Hierarchical Bayesian parameter estimation for cumulative prospect theory , 2011, Journal of Mathematical Psychology.
[6] Matthew R Nassar,et al. Taming the beast: extracting generalizable knowledge from computational models of cognition , 2016, Current Opinion in Behavioral Sciences.
[7] H. Akaike. A new look at the statistical model identification , 1974 .
[8] M. Lee,et al. Modeling individual differences in cognition , 2005, Psychonomic bulletin & review.
[9] Jan Drugowitsch,et al. Computational Precision of Mental Inference as Critical Source of Human Choice Suboptimality , 2016, Neuron.
[10] Anne G E Collins,et al. Cognitive control over learning: creating, clustering, and generalizing task-set structure. , 2013, Psychological review.
[11] Robert Taylor,et al. Resources masquerading as slots: Flexible allocation of visual working memory , 2016, Cognitive Psychology.
[12] Deanna M Barch,et al. Probabilistic Reinforcement Learning in Patients With Schizophrenia: Relationships to Anhedonia and Avolition. , 2016, Biological psychiatry. Cognitive neuroscience and neuroimaging.
[13] Jeffrey N Rouder,et al. Developing Constraint in Bayesian Mixed Models , 2017, Psychological methods.
[14] Markus Ullsperger,et al. Real and Fictive Outcomes Are Processed Differently but Converge on a Common Adaptive Mechanism , 2013, Neuron.
[15] Karl J. Friston,et al. Bayesian model selection for group studies — Revisited , 2014, NeuroImage.
[16] S. Gershman. Empirical priors for reinforcement learning models , 2016 .
[17] Andrew Heathcote,et al. An introduction to good practices in cognitive modeling , 2015 .
[18] Tomas Knapen,et al. Cross-task contributions of fronto-basal ganglia circuitry in response inhibition and conflict-induced slowing , 2017, bioRxiv.
[19] Anthony M. Norcia,et al. Why more is better: Simultaneous modeling of EEG, fMRI, and behavioral data , 2016, NeuroImage.
[20] Ellen B. Roecker,et al. Prediction error and its estimation for subset-selected models , 1991 .
[21] Richard S. Sutton,et al. Reinforcement Learning: An Introduction , 1998, IEEE Trans. Neural Networks.
[22] P. Dayan,et al. A framework for mesencephalic dopamine systems based on predictive Hebbian learning , 1996, The Journal of neuroscience : the official journal of the Society for Neuroscience.
[23] Peter Dayan,et al. Q-learning , 1992, Machine Learning.
[24] R. Ratcliff,et al. The effects of aging on the speed-accuracy compromise: Boundary optimality in the diffusion model. , 2010, Psychology and aging.
[25] Robert C. Wilson,et al. Inferring Relevance in a Changing World , 2012, Front. Hum. Neurosci..
[26] J. Bradshaw,et al. Strategic and non-strategic problem gamblers differ on decision-making under risk and ambiguity. , 2014, Addiction.
[27] Krzysztof J. Gorgolewski,et al. Reward Learning over Weeks Versus Minutes Increases the Neural Representation of Value in the Human Brain , 2018, The Journal of Neuroscience.
[28] V. Wyart,et al. Computational noise in reward-guided learning drives behavioral variability in volatile environments , 2018, Nature Neuroscience.
[29] Samuel M. McClure,et al. Joint modeling of reaction times and choice improves parameter identifiability in reinforcement learning models , 2019, Journal of Neuroscience Methods.
[30] Luigi Acerbi,et al. Variational Bayesian Monte Carlo , 2018, NeurIPS.
[31] Robert C. Wilson,et al. A causal role for right frontopolar cortex in directed, but not random, exploration , 2016, bioRxiv.
[32] Michael J. Frank,et al. By Carrot or by Stick: Cognitive Reinforcement Learning in Parkinsonism , 2004, Science.
[33] Mehdi Khamassi,et al. Modeling choice and reaction time during arbitrary visuomotor learning through the coordination of adaptive working memory and reinforcement learning , 2015, Front. Behav. Neurosci..
[34] Noah D. Goodman,et al. Empirical evidence for resource-rational anchoring and adjustment , 2017, Psychonomic Bulletin & Review.
[35] Robert C. Wilson,et al. Rational regulation of learning dynamics by pupil–linked arousal systems , 2012, Nature Neuroscience.
[36] Jorge Nocedal,et al. A trust region method based on interior point techniques for nonlinear programming , 2000, Math. Program..
[37] Alice Y. Chiang,et al. Working-memory capacity protects model-based learning from stress , 2013, Proceedings of the National Academy of Sciences.
[38] J. Townsend,et al. The Oxford Handbook of Computational and Mathematical Psychology , 2015 .
[39] Tom Heskes,et al. Hierarchical Bayesian inference for concurrent model fitting and comparison for group studies , 2018, bioRxiv.
[40] Joshua T. Abbott,et al. Random walks on semantic networks can resemble optimal foraging. , 2015, Psychological review.
[41] Kentaro Katahira,et al. How hierarchical models improve point estimates of model parameters at the individual level , 2016 .
[42] Anne G E Collins,et al. Working Memory Contributions to Reinforcement Learning Impairments in Schizophrenia , 2014, The Journal of Neuroscience.
[43] Chris R Sims,et al. Efficient coding explains the universal law of generalization in human perception , 2018, Science.
[45] N. Daw,et al. Characterizing a psychiatric symptom dimension related to deficits in goal-directed control , 2016, eLife.
[46] E. Wagenmakers,et al. Cognitive model decomposition of the BART: Assessment and application , 2011 .
[47] Aaron C. Courville,et al. The pigeon as particle filter , 2007, NIPS 2007.
[48] D. Navarro. Between the Devil and the Deep Blue Sea: Tensions Between Scientific Judgement and Statistical Model Selection , 2018, Computational Brain & Behavior.
[49] Chris Donkin,et al. Landscaping analyses of the ROC predictions of discrete-slots and signal-detection models of visual working memory , 2014, Attention, perception & psychophysics.
[50] R. Rescorla,et al. A theory of Pavlovian conditioning : Variations in the effectiveness of reinforcement and nonreinforcement , 1972 .
[51] Michael J Frank,et al. Within- and across-trial dynamics of human EEG reveal cooperative interplay between reinforcement learning and working memory , 2017, Proceedings of the National Academy of Sciences.
[52] Jukka Corander,et al. Approximate Bayesian Computation , 2013, PLoS Comput. Biol..
[53] E. Wagenmakers,et al. AIC model selection using Akaike weights , 2004, Psychonomic bulletin & review.
[54] Etienne Koechlin,et al. Foundations of human reasoning in the prefrontal cortex , 2014, Science.
[55] Jorge J. Moré,et al. Computing a Trust Region Step , 1983 .
[56] P. Dayan,et al. Model-based influences on humans’ choices and striatal prediction errors , 2011, Neuron.
[57] Daeyeol Lee,et al. Feature-based learning improves adaptability without compromising precision , 2017, Nature Communications.
[58] David M. Riefer,et al. Multinomial processing models of source monitoring. , 1990 .
[59] Simon Farrell,et al. Computational Modeling of Cognition and Behavior , 2018 .
[60] W. Geisler,et al. Contributions of ideal observer theory to vision research , 2011, Vision Research.
[61] E. Wagenmakers,et al. Model Comparison and the Principle of Parsimony , 2015 .
[62] Michael J. Frank,et al. Genetic triple dissociation reveals multiple roles for dopamine in reinforcement learning , 2007, Proceedings of the National Academy of Sciences.
[63] Roger Ratcliff,et al. A Theory of Memory Retrieval. , 1978 .
[64] M. Lee,et al. A Bayesian analysis of human decision-making on bandit problems , 2009 .
[65] M. Lee. How cognitive modeling can benefit from hierarchical Bayesian models. , 2011 .
[66] Anne G E Collins,et al. Opponent actor learning (OpAL): modeling interactive effects of striatal dopamine on reinforcement learning and choice incentive. , 2014, Psychological review.
[67] Robert C. Wilson,et al. An Approximately Bayesian Delta-Rule Model Explains the Dynamics of Belief Updating in a Changing Environment , 2010, The Journal of Neuroscience.
[68] Thomas V. Wiecki,et al. Eye tracking and pupillometry are indicators of dissociable latent decision processes. , 2014, Journal of experimental psychology. General.
[69] Q. Huys. Bayesian Approaches to Learning and Decision-Making , 2018 .
[70] Robert C. Wilson,et al. Is Model Fitting Necessary for Model-Based fMRI? , 2015, PLoS Comput. Biol..
[71] Timothy E. J. Behrens,et al. Dissociable effects of surprise and model update in parietal and anterior cingulate cortex , 2013, Proceedings of the National Academy of Sciences.
[72] James L. McClelland,et al. On the control of automatic processes: a parallel distributed processing account of the Stroop effect. , 1990, Psychological review.
[73] Jonathan D. Cohen,et al. The effect of atomoxetine on random and directed exploration in humans , 2017, PloS one.
[74] Kai Li,et al. Computational approaches to fMRI analysis , 2017, Nature Neuroscience.
[75] E. Koechlin,et al. The Importance of Falsification in Computational Cognitive Modeling , 2017, Trends in Cognitive Sciences.
[76] Robert C. Wilson,et al. Charting the Expansion of Strategic Exploratory Behavior During Adolescence , 2017, Journal of experimental psychology. General.
[77] Alexander Etz,et al. Robust Modeling in Cognitive Science , 2019, Computational Brain & Behavior.
[78] Leo Breiman,et al. Statistical Modeling: The Two Cultures (with comments and a rejoinder by the author) , 2001 .
[79] Raymond J. Dolan,et al. Disentangling the Roles of Approach, Activation and Valence in Instrumental and Pavlovian Responding , 2011, PLoS Comput. Biol..
[80] Brandon M. Turner,et al. Approximate Bayesian computation with differential evolution , 2012 .
[81] Aaron C. Courville,et al. The rat as particle filter , 2007, NIPS.
[82] M. Lee,et al. Bayesian Cognitive Modeling: A Practical Course , 2014 .
[83] M. Gutmann,et al. Approximate Bayesian Computation , 2019, Annual Review of Statistics and Its Application.
[84] Stephen B. Broomell,et al. Parameter recovery for decision modeling using choice data. , 2014 .
[85] Joshua I. Gold,et al. A Mixture of Delta-Rules Approximation to Bayesian Inference in Change-Point Problems , 2013, PLoS Comput. Biol..
[86] Luigi Acerbi,et al. Practical Bayesian Optimization for Model Fitting with Bayesian Adaptive Direct Search , 2017, NIPS.
[87] G. Box. Robustness in the Strategy of Scientific Model Building. , 1979 .
[88] Anne G E Collins,et al. How much of reinforcement learning is working memory, not reinforcement learning? A behavioral, computational, and neurogenetic analysis , 2012, The European journal of neuroscience.
[89] Yuan Chang Leong,et al. Dynamic Interaction between Reinforcement Learning and Attention in Multidimensional Environments , 2017, Neuron.
[90] J. O'Doherty,et al. Model‐Based fMRI and Its Application to Reward Learning and Decision Making , 2007, Annals of the New York Academy of Sciences.
[91] Birte U. Forstmann,et al. A Bayesian framework for simultaneously modeling neural and behavioral data , 2013, NeuroImage.
[92] Nicole Propst,et al. Classical Conditioning II: Current Research and Theory , 2016 .
[94] Thomas V. Wiecki,et al. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python , 2013, Front. Neuroinform..
[95] Xiao-Li Meng,et al. POSTERIOR PREDICTIVE ASSESSMENT OF MODEL FITNESS VIA REALIZED DISCREPANCIES , 1996 .
[96] David J. C. MacKay,et al. Information Theory, Inference, and Learning Algorithms , 2004, IEEE Transactions on Information Theory.
[97] K. Doya,et al. Representation of Action-Specific Reward Values in the Striatum , 2005, Science.
[98] E. Wagenmakers,et al. Bayesian hypothesis testing for psychologists: A tutorial on the Savage–Dickey method , 2010, Cognitive Psychology.