Sample size evolution in neuroimaging research: An evaluation of highly-cited studies (1990–2012) and of latest practices (2017–2018) in high-impact journals
[1] Howard Bowman et al. I Tried a Bunch of Things: The Dangers of Unexpected Overfitting in Classification, 2016, bioRxiv.
[2] Russell A. Poldrack et al. Dataset Decay: The problem of sequential analyses on open datasets, 2019, bioRxiv.
[3] J. Ioannidis. Publishing research with P-values: Prescribe more stringent statistical significance or proscribe statistical significance?, 2019, European Heart Journal.
[4] N. Lazar et al. Moving to a World Beyond "p < 0.05", 2019, The American Statistician.
[5] Sander Greenland et al. Scientists rise up against statistical significance, 2019, Nature.
[6] David Gal et al. Abandon Statistical Significance, 2017, The American Statistician.
[7] Aron K. Barbey et al. Small sample sizes reduce the replicability of task-based fMRI studies, 2018, Communications Biology.
[8] Martin A. Lindquist et al. Effect Size and Power in fMRI Group Analysis, 2018, bioRxiv.
[9] John P. A. Ioannidis et al. Mapping the universe of registered reports, 2018, Nature Human Behaviour.
[10] Daniel R. Little et al. Small is beautiful: In defense of the small-N design, 2018, Psychonomic Bulletin & Review.
[11] Nicholas P. Holmes et al. Justify your alpha, 2018, Nature Human Behaviour.
[12] Tor D. Wager et al. The relation between statistical power and inference in fMRI, 2017, PLoS ONE.
[13] J. Ioannidis et al. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature, 2017, PLoS Biology.
[14] John P. A. Ioannidis et al. A manifesto for reproducible science, 2017, Nature Human Behaviour.
[15] Thomas E. Nichols et al. Scanning the horizon: Towards transparent and reproducible neuroimaging research, 2016, Nature Reviews Neuroscience.
[16] Thomas E. Nichols et al. Best practices in data analysis and sharing in neuroimaging using MRI, 2017, Nature Neuroscience.
[17] Christopher D. Chambers et al. Redefine statistical significance, 2017, Nature Human Behaviour.
[18] J. Ioannidis et al. When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment, 2016, bioRxiv.
[19] Denes Szucs et al. A Tutorial on Hunting Statistical Significance by Chasing N, 2016, Frontiers in Psychology.
[20] Satrajit S. Ghosh et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments, 2016, Scientific Data.
[21] Ruth Seurinck et al. Power and sample size calculations for fMRI studies based on the prevalence of active peaks, 2016, bioRxiv.
[22] J. Ioannidis et al. Evolution of Reporting P Values in the Biomedical Literature, 1990–2015, 2016, JAMA.
[23] Scott D. Brown et al. A purely confirmatory replication study of structural brain-behavior correlations, 2015, Cortex.
[24] J. Ioannidis et al. Reproducibility in Science: Improving the Standard for Basic and Preclinical Research, 2015, Circulation Research.
[25] Brian A. Nosek et al. Publication and other reporting biases in cognitive sciences: Detection, prevalence, and prevention, 2014, Trends in Cognitive Sciences.
[26] Ron Goeree et al. The Reporting of Observational Clinical Functional Magnetic Resonance Imaging Studies: A Systematic Review, 2014, PLoS ONE.
[27] John Suckling et al. Are power calculations useful? A multicentre neuroimaging study, 2014, Human Brain Mapping.
[28] R. Tibshirani et al. Increasing value and reducing waste in research design, conduct, and analysis, 2014, The Lancet.
[29] Michael Ingre. Why small low-powered studies are worse than large high-powered studies and how to protect against "trivial" findings in research: Comment on Friston (2012), 2013, NeuroImage.
[30] Martin A. Lindquist et al. Ironing out the statistical wrinkles in "ten ironic rules", 2013, NeuroImage.
[31] J. Ioannidis et al. Potential Reporting Bias in fMRI Studies of the Brain, 2013, PLoS ONE.
[32] Brian A. Nosek et al. Power failure: Why small sample size undermines the reliability of neuroscience, 2013, Nature Reviews Neuroscience.
[33] J. Ioannidis. Why Science Is Not Necessarily Self-Correcting, 2012, Perspectives on Psychological Science.
[34] Joshua Carp. The secret lives of experiments: Methods reporting in the fMRI literature, 2012, NeuroImage.
[35] J. Mumford. A power calculation guide for fMRI studies, 2012, Social Cognitive and Affective Neuroscience.
[36] Jennifer J. Richler et al. Effect size estimates: Current use, calculations, and interpretation, 2012, Journal of Experimental Psychology: General.
[37] C. Glenn Begley et al. Raise standards for preclinical cancer research, 2012.
[38] Leif D. Nelson et al. False-Positive Psychology, 2011, Psychological Science.
[39] John P. A. Ioannidis et al. Meta-research: The art of getting it wrong, 2010, Research Synthesis Methods.
[40] T. Yarkoni. Big Correlations in Little Studies: Inflated fMRI Correlations Reflect Low Statistical Power. Commentary on Vul et al. (2009), 2009, Perspectives on Psychological Science.
[41] L. Hedges et al. Introduction to Meta-Analysis, 2009, International Coaching Psychology Review.
[42] J. Ioannidis. Why Most Discovered True Associations Are Inflated, 2008, Epidemiology.
[43] Nick F. Ramsey et al. Within-subject variation in BOLD-fMRI signal changes across repeated measurements: Quantification and implications for sample size, 2008, NeuroImage.
[44] Thomas E. Nichols et al. Power calculation for group fMRI studies accounting for arbitrary design and temporal autocorrelation, 2008, NeuroImage.
[45] S. Hayasaka et al. Power and sample size calculation for neuroimaging studies by non-central random field theory, 2007, NeuroImage.
[46] Edgar Erdfelder et al. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences, 2007, Behavior Research Methods.
[47] J. Ioannidis. Molecular evidence-based medicine, 2007, European Journal of Clinical Investigation.
[48] R. McGrath et al. When effect sizes disagree: The case of r and d, 2006, Psychological Methods.
[49] J. Ioannidis. Why Most Published Research Findings Are False, 2005, PLoS Medicine.
[50] J. Ioannidis. Contradicted and initially stronger effects in highly cited clinical research, 2005, JAMA.
[51] Feng Li et al. An Introduction to Meta-analysis, 2005.
[52] Kevin Murphy et al. An empirical investigation into the number of subjects required for an event-related fMRI study, 2004, NeuroImage.
[53] Gary H. Glover et al. Estimating sample size in functional MRI (fMRI) neuroimaging studies: Statistical power analyses, 2002, Journal of Neuroscience Methods.
[54] Thomas E. Nichols et al. Statistical limitations in functional neuroimaging. II. Signal detection and statistical inference, 1999, Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences.
[55] Karl J. Friston et al. How Many Subjects Constitute a Study?, 1999, NeuroImage.
[56] G. Gigerenzer et al. Do studies of statistical power have an effect on the power of studies?, 1989.
[57] E. S. Pearson et al. On the Problem of the Most Efficient Tests of Statistical Hypotheses, 1933.