Nudging transparent behavioural science and policy

There are inherent differences between the priorities of academics and those of policy-makers. These differences pose unique challenges for teams such as the Behavioural Insights Team (BIT), which has positioned itself as an organisation conducting academically rigorous behavioural science research in policy settings. Here we outline the threats to research transparency and reproducibility that stem from working with policy-makers and other non-academic stakeholders. These threats affect how we perform, communicate, verify and evaluate research. Solutions that increase research transparency include pre-registering study protocols, making data open and publishing summaries of results. We suggest an incentive structure (a simple ‘nudge’) that rewards BIT's non-academic partners for engaging in these practices.
