Why think step-by-step? Reasoning emerges from the locality of experience