Sample size evolution in neuroimaging research: An evaluation of highly-cited studies (1990–2012) and of latest practices (2017–2018) in high-impact journals
[1] Russell A. Poldrack, et al. Dataset Decay: the problem of sequential analyses on open datasets, 2019, bioRxiv.
[2] J. Ioannidis. Publishing research with P-values: Prescribe more stringent statistical significance or proscribe statistical significance?, 2019, European Heart Journal.
[3] N. Lazar, et al. Moving to a World Beyond “p < 0.05”, 2019, The American Statistician.
[4] Sander Greenland, et al. Scientists rise up against statistical significance, 2019, Nature.
[5] John P. A. Ioannidis. Why Most Published Research Findings Are False, 2019, CHANCE.
[6] Aron K. Barbey, et al. Small sample sizes reduce the replicability of task-based fMRI studies, 2018, Communications Biology.
[7] Martin A. Lindquist, et al. Effect Size and Power in fMRI Group Analysis, 2018, bioRxiv.
[8] J. Ioannidis, et al. Mapping the universe of registered reports, 2018, Nature Human Behaviour.
[9] Daniel R. Little, et al. Small is beautiful: In defense of the small-N design, 2018, Psychonomic Bulletin & Review.
[10] Nicholas P. Holmes, et al. Justify your alpha, 2018, Nature Human Behaviour.
[11] Tor D. Wager, et al. The relation between statistical power and inference in fMRI, 2017, PLoS ONE.
[12] David Gal, et al. Abandon Statistical Significance, 2017, The American Statistician.
[13] Christopher D. Chambers, et al. Redefine statistical significance, 2017, Nature Human Behaviour.
[14] Thomas E. Nichols, et al. Best practices in data analysis and sharing in neuroimaging using MRI, 2017, Nature Neuroscience.
[15] John P. A. Ioannidis, et al. A manifesto for reproducible science, 2017, Nature Human Behaviour.
[16] J. Ioannidis, et al. When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment, 2016, bioRxiv.
[17] Howard Bowman, et al. I Tried a Bunch of Things: The Dangers of Unexpected Overfitting in Classification, 2016, bioRxiv.
[18] Denes Szucs, et al. A Tutorial on Hunting Statistical Significance by Chasing N, 2016, Frontiers in Psychology.
[19] J. Ioannidis, et al. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature, 2016, bioRxiv.
[20] Thomas E. Nichols, et al. Best Practices in Data Analysis and Sharing in Neuroimaging using MRI, 2016, bioRxiv.
[21] Thomas E. Nichols, et al. Scanning the horizon: towards transparent and reproducible neuroimaging research, 2016, Nature Reviews Neuroscience.
[22] Satrajit S. Ghosh, et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments, 2016, Scientific Data.
[23] Ruth Seurinck, et al. Power and sample size calculations for fMRI studies based on the prevalence of active peaks, 2016, bioRxiv.
[24] J. Ioannidis, et al. Evolution of Reporting P Values in the Biomedical Literature, 1990–2015, 2016, JAMA.
[25] Scott D. Brown, et al. A purely confirmatory replication study of structural brain-behavior correlations, 2015, Cortex.
[26] J. Ioannidis, et al. Reproducibility in Science: Improving the Standard for Basic and Preclinical Research, 2015, Circulation Research.
[27] Brian A. Nosek, et al. Publication and other reporting biases in cognitive sciences: detection, prevalence, and prevention, 2014, Trends in Cognitive Sciences.
[28] Ron Goeree, et al. The Reporting of Observational Clinical Functional Magnetic Resonance Imaging Studies: A Systematic Review, 2014, PLoS ONE.
[29] John Suckling, et al. Are power calculations useful? A multicentre neuroimaging study, 2014, Human Brain Mapping.
[30] R. Tibshirani, et al. Increasing value and reducing waste in research design, conduct, and analysis, 2014, The Lancet.
[31] Michael Ingre, et al. Why small low-powered studies are worse than large high-powered studies and how to protect against “trivial” findings in research: Comment on Friston (2012), 2013, NeuroImage.
[32] Martin A. Lindquist, et al. Ironing out the statistical wrinkles in “ten ironic rules”, 2013, NeuroImage.
[33] J. Ioannidis, et al. Potential Reporting Bias in fMRI Studies of the Brain, 2013, PLoS ONE.
[34] Brian A. Nosek, et al. Power failure: why small sample size undermines the reliability of neuroscience, 2013, Nature Reviews Neuroscience.
[35] J. Ioannidis. Why Science Is Not Necessarily Self-Correcting, 2012, Perspectives on Psychological Science.
[36] J. Carp. The secret lives of experiments: Methods reporting in the fMRI literature, 2012, NeuroImage.
[37] J. Mumford. A power calculation guide for fMRI studies, 2012, Social Cognitive and Affective Neuroscience.
[38] C. Begley, et al. Drug development: Raise standards for preclinical cancer research, 2012, Nature.
[39] Jennifer J. Richler, et al. Effect size estimates: current use, calculations, and interpretation, 2012, Journal of Experimental Psychology: General.
[40] Leif D. Nelson, et al. False-Positive Psychology, 2011, Psychological Science.
[41] John P. A. Ioannidis, et al. Meta-research: The art of getting it wrong, 2010, Research Synthesis Methods.
[42] T. Yarkoni. Big Correlations in Little Studies: Inflated fMRI Correlations Reflect Low Statistical Power—Commentary on Vul et al. (2009), 2009, Perspectives on Psychological Science.
[43] L. Hedges, et al. Introduction to Meta-Analysis, 2009, International Coaching Psychology Review.
[44] J. Ioannidis. Why Most Discovered True Associations Are Inflated, 2008, Epidemiology.
[45] Nick F. Ramsey, et al. Within-subject variation in BOLD-fMRI signal changes across repeated measurements: Quantification and implications for sample size, 2008, NeuroImage.
[46] Thomas E. Nichols, et al. Power calculation for group fMRI studies accounting for arbitrary design and temporal autocorrelation, 2008, NeuroImage.
[47] S. Hayasaka, et al. Power and sample size calculation for neuroimaging studies by non-central random field theory, 2007, NeuroImage.
[48] Edgar Erdfelder, et al. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences, 2007, Behavior Research Methods.
[49] J. Ioannidis. Molecular evidence-based medicine, 2007, European Journal of Clinical Investigation.
[50] R. McGrath, et al. When effect sizes disagree: the case of r and d, 2006, Psychological Methods.
[51] J. Ioannidis. Contradicted and initially stronger effects in highly cited clinical research, 2005, JAMA.
[52] Kevin Murphy, et al. An empirical investigation into the number of subjects required for an event-related fMRI study, 2004, NeuroImage.
[53] Gary H. Glover, et al. Estimating sample size in functional MRI (fMRI) neuroimaging studies: Statistical power analyses, 2002, Journal of Neuroscience Methods.
[54] Thomas E. Nichols, et al. Statistical limitations in functional neuroimaging. II. Signal detection and statistical inference, 1999, Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences.
[55] Karl J. Friston, et al. How Many Subjects Constitute a Study?, 1999, NeuroImage.
[56] G. Gigerenzer, et al. Do studies of statistical power have an effect on the power of studies?, 1989, Psychological Bulletin.
[57] Feng Li, et al. An Introduction to Meta-analysis, 2005.
[58] E. S. Pearson, et al. On the Problem of the Most Efficient Tests of Statistical Hypotheses, 1933, Philosophical Transactions of the Royal Society of London, Series A.