How to Identify and How to Conduct Research that Is Informative and Reproducible

The expectations that PhD students face in the social sciences have been rising steadily. At the same time, the behavioral and social sciences have faced substantial turmoil questioning the reproducibility of research findings and the way researchers should conduct research. In the present chapter, we provide guidance on how PhD students can make sense of this turmoil, or “sail from the seas of chaos into the corridor of stability” (Lakens & Evers, 2014). We review recent developments in the social sciences, provide information about tools and methods for evaluating the validity and quality of published research, and offer suggestions on ways to enhance the informational value of one’s own research by focusing on aspects such as preregistration, power and accuracy, effect sizes, open science, replications, and underused methodological techniques such as Bayesian inference. These tools and suggestions cannot replace good theory or reliable and valid measurement instruments, but they can be a first step towards more informative and reproducible social science.
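The chapter's emphasis on power and accuracy can be made concrete with an a-priori sample-size calculation. The sketch below uses the standard normal-approximation formula for a two-sided, two-group comparison, n per group = 2·((z₁₋α/₂ + z₁₋β) / d)²; the effect size d = 0.5 is an illustrative assumption, not a value taken from the chapter.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """A-priori sample size per group for a two-sided, two-sample z-test
    (normal approximation to the t-test), given a standardized effect size d."""
    z = NormalDist()                     # standard normal distribution
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)            # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Illustrative: a "medium" effect of d = 0.5 at alpha = .05 with 80% power
# requires about 63 participants per group under this approximation.
print(sample_size_per_group(0.5))
```

Exact t-test calculations, such as those produced by G*Power (Faul et al., 2007), give slightly larger numbers (about 64 per group here), and simulation-based tools such as the simr package (Green & MacLeod, 2016) extend power analysis to mixed models; the normal approximation above is only a quick first check.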

[1]  Brian A. Nosek,et al.  Promoting Transparency in Social Science Research , 2014, Science.

[2]  Hristos Doucouliagos,et al.  Meta‐regression approximations to reduce publication selection bias , 2014, Research synthesis methods.

[3]  J. Ioannidis Why Most Published Research Findings Are False , 2005, PLoS medicine.

[4]  P. Lee,et al.  Publication bias in meta-analysis: its causes and consequences. , 2000, Journal of clinical epidemiology.

[5]  K. Dickersin The existence of publication bias and risk factors for its occurrence. , 1990, JAMA.

[6]  Etienne P. LeBel,et al.  A Unified Framework to Quantify the Credibility of Scientific Findings , 2018, Advances in Methods and Practices in Psychological Science.

[7]  A. C. Elms The crisis of confidence in social psychology. , 1975 .

[8]  N. Kerr HARKing: Hypothesizing After the Results are Known , 1998, Personality and social psychology review : an official journal of the Society for Personality and Social Psychology, Inc.

[9]  Jacob Cohen The earth is round (p < .05) , 1994 .

[10]  Jelte M. Wicherts,et al.  Researchers’ Intuitions About Power in Psychological Research , 2016, Psychological science.

[11]  D. Heisey,et al.  The Abuse of Power , 2001 .

[12]  L. John,et al.  Toward Transparent Reporting of Psychological Science , 2017 .

[13]  Michèle B. Nuijten,et al.  statcheck: Extract statistics from articles and recompute p values (R package version 1.0.0.) , 2014 .

[14]  Stephen D. Short,et al.  Determining Power and Sample Size for Simple and Complex Mediation Models , 2017 .

[15]  Wolfgang Viechtbauer,et al.  Conducting Meta-Analyses in R with the metafor Package , 2010 .

[16]  D Stephen Lindsay,et al.  Sharing Data and Materials in Psychological Science , 2017, Psychological science.

[17]  Leif D. Nelson,et al.  P-Curve: A Key to the File Drawer , 2013, Journal of experimental psychology. General.

[18]  Leif D. Nelson,et al.  Psychology's Renaissance , 2018, Annual review of psychology.

[19]  Gilles E. Gignac,et al.  Effect size guidelines for individual differences researchers , 2016 .

[20]  Han L. J. van der Maas,et al.  An Agenda for Purely Confirmatory Research , 2012, Perspectives on psychological science : a journal of the Association for Psychological Science.

[21]  Leif D. Nelson,et al.  A 21 Word Solution , 2012 .

[22]  Kai J. Jonas,et al.  How can preregistration contribute to research in our field? , 2016 .

[23]  Michèle B. Nuijten,et al.  The Replication Paradox: Combining Studies can Decrease Accuracy of Effect Size Estimates , 2015 .

[24]  Peter Green,et al.  SIMR: an R package for power analysis of generalized linear mixed models by simulation , 2016 .

[25]  A. Lupia,et al.  Openness in Political Science: Data Access and Research Transparency , 2013, PS: Political Science & Politics.

[26]  E. Wagenmakers,et al.  Why psychologists must change the way they analyze their data: the case of psi: comment on Bem (2011). , 2011, Journal of personality and social psychology.

[27]  Barbara A. Spellman,et al.  A Short (Personal) Future History of Revolution 2.0 , 2015, Perspectives on psychological science : a journal of the Association for Psychological Science.

[28]  W. W. Rozeboom The fallacy of the null-hypothesis significance test. , 1960, Psychological bulletin.

[29]  Joachim Vandekerckhove,et al.  Editorial: Bayesian methods for advancing psychological science , 2018, Psychonomic bulletin & review.

[30]  John P. A. Ioannidis,et al.  A manifesto for reproducible science , 2017, Nature Human Behaviour.

[31]  Alexander Etz,et al.  Making replication mainstream , 2017, Behavioral and Brain Sciences.

[32]  W. Vanpaemel,et al.  Are We Wasting a Good Crisis? The Availability of Psychological Research Data after the Storm , 2015 .

[33]  John P A Ioannidis,et al.  Ensuring the integrity of clinical practice guidelines: a tool for protecting patients , 2013, BMJ : British Medical Journal.

[34]  Anne M. Scheel,et al.  Equivalence Testing for Psychological Research: A Tutorial , 2018, Advances in Methods and Practices in Psychological Science.

[35]  C. Chambers Registered Reports: A new publishing initiative at Cortex , 2013, Cortex.

[36]  Reginald B. Adams,et al.  Investigating Variation in Replicability: A “Many Labs” Replication Project , 2014 .

[37]  Ken Kelley,et al.  Sample-Size Planning for More Accurate Statistical Power: A Method Adjusting Sample Effect Sizes for Publication Bias and Uncertainty , 2017, Psychological science.

[38]  R. Rosenthal The file drawer problem and tolerance for null results , 1979 .

[39]  Alex O Holcombe,et al.  An Introduction to Registered Replication Reports at Perspectives on Psychological Science , 2014, Perspectives on psychological science : a journal of the Association for Psychological Science.

[40]  Zoltan Dienes,et al.  Using Bayes to get the most out of non-significant results , 2014, Front. Psychol..

[41]  C. Tenopir,et al.  Data Sharing by Scientists: Practices and Perceptions , 2011, PloS one.

[42]  Simine Vazire,et al.  The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power , 2014, PloS one.

[43]  R. Giner-Sorolla,et al.  Pre-registration in social psychology—A discussion and suggested template , 2016 .

[44]  D. A. Kenny,et al.  Experiments with More Than One Random Factor: Designs, Analytic Models, and Statistical Power , 2017, Annual review of psychology.

[45]  W. Levelt,et al.  Flawed science: The fraudulent research practices of social psychologist Diederik Stapel , 2012 .

[46]  F. Dablander,et al.  How to become a Bayesian in eight easy steps: An annotated reading list , 2018, Psychonomic bulletin & review.

[47]  Jeffrey R. Spies,et al.  The Replication Recipe: What Makes for a Convincing Replication? , 2014 .

[48]  Brian A. Nosek,et al.  Recommendations for Increasing Replicability in Psychology , 2013 .

[49]  R. Grissom Probability of the superior outcome of one treatment over another. , 1994 .

[50]  Brian A. Nosek,et al.  Registered Reports: A Method to Increase the Credibility of Published Results , 2014 .

[51]  Felix D. Schönbrodt,et al.  Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods , 2019, Advances in Methods and Practices in Psychological Science.

[52]  Brian A. Nosek,et al.  The preregistration revolution , 2018, Proceedings of the National Academy of Sciences.

[53]  Michèle B. Nuijten,et al.  The prevalence of statistical reporting errors in psychology (1985–2013) , 2015, Behavior Research Methods.

[54]  Easy preregistration will benefit any research , 2018, Nature Human Behaviour.

[55]  Joseph R. Rausch,et al.  Sample size planning for the standardized mean difference: accuracy in parameter estimation via narrow confidence intervals. , 2006, Psychological methods.

[56]  J. Wicherts,et al.  The Rules of the Game Called Psychological Science , 2012, Perspectives on psychological science : a journal of the Association for Psychological Science.

[57]  A. Greenwald Consequences of Prejudice Against the Null Hypothesis , 1975 .

[58]  Rolf A. Zwaan,et al.  Registered Replication Report , 2014, Perspectives on psychological science : a journal of the Association for Psychological Science.

[59]  D. Lakens Equivalence Tests , 2017, Social psychological and personality science.

[60]  James A. J. Heathers,et al.  The GRIM Test , 2017 .

[61]  Michael C. Frank,et al.  A practical guide for transparency in psychological science , 2018 .

[62]  J. Ioannidis,et al.  Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature , 2017, PLoS biology.

[63]  J. Krueger,et al.  Testing Significance Testing , 2018 .

[64]  G. Gigerenzer,et al.  Do studies of statistical power have an effect on the power of studies , 1989 .

[65]  John P. A. Ioannidis,et al.  p-Curve and p-Hacking in Observational Research , 2016, PloS one.

[66]  D. Lakens Performing High-Powered Studies Efficiently with Sequential Analyses , 2014 .

[67]  Michael C. Frank,et al.  Estimating the reproducibility of psychological science , 2015, Science.

[68]  Leif D. Nelson,et al.  Better P-curves: Making P-curve analysis more robust to errors, fraud, and ambitious P-hacking, a Reply to Ulrich and Miller (2015). , 2015, Journal of experimental psychology. General.

[69]  Ken Kelley,et al.  Sample Size Planning with Applications to Multiple Regression: Power and Accuracy for Omnibus and Targeted Effects , 2008 .

[70]  Susann Fiedler,et al.  Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method for Increasing Transparency , 2016, PLoS biology.

[71]  Ulf Böckenholt,et al.  Adjusting for Publication Bias in Meta-Analysis , 2016, Perspectives on psychological science : a journal of the Association for Psychological Science.

[72]  Edgar Erdfelder,et al.  G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences , 2007, Behavior research methods.

[73]  Gregory Francis,et al.  The Psychology of Replication and Replication in Psychology , 2012, Perspectives on psychological science : a journal of the Association for Psychological Science.

[74]  U. Schimmack The ironic effect of significant results on the credibility of multiple-study articles. , 2012, Psychological methods.

[75]  Lorne Campbell,et al.  Benefits of Open and High-Powered Research Outweigh Costs , 2017 .

[76]  E. Wagenmakers,et al.  Quantifying Support for the Null Hypothesis in Psychology: An Empirical Investigation , 2018, Advances in Methods and Practices in Psychological Science.

[77]  M. Lee,et al.  Bayesian Cognitive Modeling: A Practical Course , 2014 .

[78]  P. Meehl Theory-Testing in Psychology and Physics: A Methodological Paradox , 1967, Philosophy of Science.

[79]  D. Lakens,et al.  Sailing From the Seas of Chaos Into the Corridor of Stability , 2014, Perspectives on psychological science : a journal of the Association for Psychological Science.

[80]  D. Lakens,et al.  When power analyses based on pilot data are biased: Inaccurate effect size estimators and follow-up bias , 2018 .

[81]  Jacob Cohen,et al.  The statistical power of abnormal-social psychological research: a review. , 1962, Journal of abnormal and social psychology.

[82]  S. Maxwell The persistence of underpowered studies in psychological research: causes, consequences, and remedies. , 2004, Psychological methods.

[83]  Brian A. Nosek,et al.  Power failure: why small sample size undermines the reliability of neuroscience , 2013, Nature Reviews Neuroscience.

[84]  Jacob Cohen Statistical Power Analysis , 1992 .

[85]  Brian A. Nosek,et al.  Promoting an open research culture , 2015, Science.

[86]  S Duval,et al.  Trim and Fill: A Simple Funnel‐Plot–Based Method of Testing and Adjusting for Publication Bias in Meta‐Analysis , 2000, Biometrics.

[87]  Jennifer J. Richler,et al.  Effect size estimates: current use, calculations, and interpretation. , 2012, Journal of experimental psychology. General.

[88]  D. Fanelli “Positive” Results Increase Down the Hierarchy of the Sciences , 2010, PloS one.

[89]  Ken Kelley,et al.  Sample size for multiple regression: obtaining regression coefficients that are accurate, not simply significant. , 2003, Psychological methods.

[90]  Jin X. Goh,et al.  Mini Meta-Analysis of Your Own Studies: Some Arguments on Why and a Primer on How , 2016 .

[91]  Lorne Campbell,et al.  Registered Replication Report , 2016, Perspectives on psychological science : a journal of the Association for Psychological Science.

[92]  Leif D. Nelson,et al.  False-Positive Psychology , 2011, Psychological science.

[93]  P. Lachenbruch Statistical Power Analysis for the Behavioral Sciences (2nd ed.) , 1989 .

[94]  E. Eich Business Not as Usual , 2014, Psychological science.

[95]  Daniël Lakens,et al.  Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs , 2013, Front. Psychol..

[96]  D. Lakens,et al.  Too True to be Bad , 2017, Social psychological and personality science.

[97]  S. Maxwell,et al.  Is psychology suffering from a replication crisis? What does "failure to replicate" really mean? , 2015, The American psychologist.

[98]  Felix D. Schönbrodt,et al.  Sequential Hypothesis Testing With Bayes Factors: Efficiently Testing Mean Differences , 2017, Psychological methods.

[99]  Christopher R. Chartier,et al.  The Psychological Science Accelerator: Advancing Psychology Through a Distributed Collaborative Network , 2018, Advances in methods and practices in psychological science.

[100]  H. Pashler,et al.  Editors’ Introduction to the Special Section on Replicability in Psychological Science , 2012, Perspectives on psychological science : a journal of the Association for Psychological Science.

[101]  Neil Malhotra,et al.  Publication bias in the social sciences: Unlocking the file drawer , 2014, Science.

[102]  G. Cumming,et al.  Researchers misunderstand confidence intervals and standard error bars. , 2005, Psychological methods.

[103]  G. Loewenstein,et al.  Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling , 2012, Psychological science.

[104]  Matthew C. Makel,et al.  Replications in Psychology Research , 2012, Perspectives on psychological science : a journal of the Association for Psychological Science.

[105]  E. Wagenmakers  A practical solution to the pervasive problems of p values , 2007, Psychonomic bulletin & review.

[106]  Joseph R. Rausch,et al.  Sample size planning for statistical power and accuracy in parameter estimation. , 2008, Annual review of psychology.

[107]  D. Lakens,et al.  Rewarding Replications , 2012, Perspectives on psychological science : a journal of the Association for Psychological Science.

[108]  Rolf A. Zwaan,et al.  Registered Replication Report , 2016, Perspectives on psychological science : a journal of the Association for Psychological Science.

[109]  D. Bem Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect. , 2011, Journal of personality and social psychology.

[110]  Joel B. Greenhouse,et al.  Selection Models and the File Drawer Problem , 1988 .