Introduction to the Special Section on Advancing Our Methods and Practices

Psychological science is in the midst of a sea change. Over the last few years, our field's confidence in the status quo has been shaken by a number of largely unrelated events that happened to coincide—Jonah Lehrer's (2010) widely read New Yorker article on the effects of publication bias in science, Bem's (2011) controversial paper on precognition, rising concerns about direct replication, the Stapel fraud case (Tilburg University, 2011), and the publication of several troubling critiques of current practices in research and publishing (e.g., Simmons, Nelson, & Simonsohn, 2011; Vul, Harris, Winkielman, & Pashler, 2009). Although the ensuing crisis of confidence was by no means the first that psychology has faced (see, e.g., Rosenthal & Rubin, 1978; Sears, 1986; Wicker, 1969), this one seems to have resonated more widely and more deeply. The convergence of events within our field was situated within a broader context of similar issues emerging across a range of scientific disciplines, from cancer research to genetics to neuroscience (e.g., Begley & Ellis, 2012; Button et al., 2013; Fanelli, 2012; Ioannidis, 2005; Moonesinghe, Khoury, & Janssens, 2007). Meanwhile, online communication, media attention, and a series of conference symposia and journal issues kept the critiques and concerns front and center.

The first wave of responses to the sense of crisis understandably focused on problems—many of which had been raised before, even repeatedly (Cohen, 1992; Greenwald, 1975; Maxwell, 2004; Rosenthal, 1979)—but which in this new context seemed to demand the field's consideration more urgently and insistently. A chorus of critiques focused our attention on issues of publication bias, underpowered studies, replication, flashy findings, and questionable research practices (e.g., Bakker, van Dijk, & Wicherts, 2012; John, Loewenstein, & Prelec, 2012; Ledgerwood & Sherman, 2012; Nosek, Spies, & Motyl, 2012; Pashler & Wagenmakers, 2012). Some embraced these critiques wholeheartedly, whereas others pushed back, arguing that some of the problems were overstated or oversimplified. This first wave of responses was loud enough and big enough to overcome the inevitable inertia of an existing system and propel the field into forward motion.

We can and surely should debate which problems are most pressing and which solutions most suitable (e.g., Cesario, 2014; Fiedler, Kutzner, & Krueger, 2012; Murayama, Pekrun, & Fiedler, 2013; Stroebe & Strack, 2014). But at this point, most can agree that there are some real problems with the status quo. Many researchers feel poised to change their current practices in an effort to improve our science. Already, new initiatives and journal policies have started moving the field forward to meet some of the recently articulated challenges head on (Chambers & Munafò, 2013; Eich, 2014; LeBel et al., 2013; Open Science Framework, 2014; PSPB, 2014; Spellman, 2013). It is in many ways an exciting time: Our momentum has placed psychological science at the forefront of a broader movement to improve standards and practices across scientific disciplines (see, e.g., McNutt, 2014).

But of course, change also involves uncertainty. For the average researcher or student standing on the shifting sands of new journal policies, conflicting reviewer standards, and ongoing debates about best practices, the view can seem rather turbulent. One might reasonably wonder, "What should I be doing differently in my own research? Do I really need to triple all my sample sizes? Is it ever okay to peek at my data? What should I conclude when I run the same study twice and get different results? And in the midst of all of this, how should I adapt my own expectations as a reviewer or editor . . . and what can I expect from others reviewing my manuscripts?"

This special section brings together a collection of articles that address exactly these kinds of questions. The goal is to provide a concrete set of best practices—that is, things we can change right now about the way we conduct and evaluate research that will make our science better. The section opens with an overview of cutting-edge tools that enable researchers to increase the evidential value of their studies (Lakens & Evers, 2014, this issue).

References

[1] Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on Psychological Science.

[2] Hewitt, S., et al. (2019). Reproducibility. In Encyclopedia of Social Network Analysis and Mining (2nd ed.).

[3] Braver, S. L., Thoemmes, F. J., & Rosenthal, R. (2014). Continuously cumulating meta-analysis and replicability. Perspectives on Psychological Science.

[4] Holcombe, A. O., et al. (2013). Badges to acknowledge open practices.

[5] Murayama, K., Pekrun, R., & Fiedler, K. (2014). Research practices that can prevent an inflation of false-positive rates. Personality and Social Psychology Review.

[6] Indiana Libraries. (2007). Manuscript submission guidelines.

[7] Lakens, D., & Evers, E. R. K. (2014). Sailing from the seas of chaos into the corridor of stability. Perspectives on Psychological Science.

[8] Stroebe, W., & Strack, F. (2014). The alleged crisis and the illusion of exact replication. Perspectives on Psychological Science.

[9] Perugini, M., Gallucci, M., & Costantini, G. (2014). Safeguard power as a protection against imprecise power estimates. Perspectives on Psychological Science.

[10] Ledgerwood, A., & Sherman, J. W. (2012). Short, sweet, and problematic? The rise of the short report in psychological science. Perspectives on Psychological Science.

[11] Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine.

[12] Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics.

[13] Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data base on social psychology's view of human nature. Journal of Personality and Social Psychology.

[14] Moonesinghe, R., Khoury, M. J., & Janssens, A. C. J. W. (2007). Most published research findings are false—but a little replication goes a long way. PLoS Medicine.

[15] Cesario, J. (2014). Priming, replication, and the hardest science. Perspectives on Psychological Science.

[16] Eich, E. (2014). Business not as usual. Psychological Science.

[17] Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin.

[18] Sagarin, B. J., et al. (2014). An ethical approach to peeking at data. Perspectives on Psychological Science.

[19] Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science.

[20] Rosenthal, R., & Rubin, D. B. (1978). Interpersonal expectancy effects: The first 345 studies. Behavioral and Brain Sciences.

[21] Begley, C. G., & Ellis, L. M. (2012). Drug development: Raise standards for preclinical cancer research. Nature.

[22] Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science.

[23] Cohen, J. (1992). A power primer. Psychological Bulletin.

[24] Fiedler, K., Kutzner, F., & Krueger, J. I. (2012). The long way from α-error control to validity proper. Perspectives on Psychological Science.

[25] John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science.

[26] Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology.

[27] Wicker, A. W. (1969). Attitudes versus actions: The relationship of verbal and overt behavioral responses to attitude objects. Journal of Social Issues.

[28] Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin.

[29] Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods.

[30] Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science.

[31] Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience.