Inflation of False-Positive Findings in Psychological Research: Possible Causes and Countermeasures

Evidence has recently been accumulating that false-positive findings are being reported in scientific publications at an increasing rate, so that the research literature presents a distorted picture of reality. The Review Board (Fachkollegium) for Psychology of the Deutsche Forschungsgemeinschaft has taken up this problem and discussed the possible causes of false-positive findings. This article summarizes that discussion and calls on applicants to pay closer attention to this issue when preparing grant proposals. We also appeal to applicants, reviewers, and editors to give greater weight to negative findings and to replications in grant proposals and scientific work, including clinical studies.
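The abstract's core claim, that selective reporting of significant results leaves the published literature with an inflated share of false positives, can be illustrated with a small simulation. The Python sketch below is not taken from the article; the assumed base rate of true hypotheses, the assumed statistical power, and the rule that only significant results are published are illustrative assumptions in the spirit of Ioannidis (2005).

import numpy as np

# Minimal sketch under illustrative assumptions (not the article's analysis):
# many studies are run, but only "significant" results enter the literature.
rng = np.random.default_rng(42)

n_studies = 10_000   # hypothetical number of studies conducted
prior_true = 0.2     # assumed fraction of tested hypotheses that are true
alpha = 0.05         # conventional significance threshold
power = 0.35         # assumed (low) statistical power of a typical study

is_true = rng.random(n_studies) < prior_true
# A study yields p < alpha with probability `power` if the effect is real,
# and with probability `alpha` if it is not (a false positive).
significant = np.where(is_true,
                       rng.random(n_studies) < power,
                       rng.random(n_studies) < alpha)

published = significant  # assume only significant results get published
false_positive_share = np.mean(~is_true[published])
print(f"Share of false positives among published findings: {false_positive_share:.2f}")

With these illustrative numbers, roughly a third of the published "findings" are false positives, even though the nominal error rate per test is 5 percent; raising power or publishing negative results lowers that share, which is the point of the countermeasures the article advocates.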
