Modeling the influence of working memory, reinforcement, and action uncertainty on reaction time and choice during instrumental learning