Misleading Evidence and Evidence-Led Policy: Making Social Science more Experimental

Increasing demands by government for “evidence-led” policy raise the risk that research evidence will mislead government rather than lead it to unbiased conclusions. The need for unbiased research conclusions has never been greater, yet few consumers of research understand the statistical biases with which science must always struggle. This article introduces the volume's discussion of these issues with an explanation of the major threats of bias in social science research and a map of the differing scientific opinions on how to deal with those threats. The thesis of the volume is that many of these threats could be reduced by making social science more experimental. The fact that even experimental evidence contains threats of bias does not alter that claim; it merely suggests another: that educated consumers of social science may be the best defense against misleading evidence of all kinds.
