The study design elements employed by researchers in preclinical animal experiments from two research domains and implications for automation of systematic reviews

Systematic reviews increasingly incorporate data from preclinical animal experiments into evidence networks, and there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also help prioritize automation efforts toward the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research in order to inform assessment of bias and error specific to that research. Using 100 preclinical experiments from each of two research domains, brain trauma and toxicology (200 in total), we assessed the design elements described by the investigators. We evaluated the Methods and Materials sections of reports for descriptions of the following design elements: 1) use of a comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of factors. Many investigators reported design elements that suggested the potential for unit-of-analysis errors: repeated measurements of the outcome (94/200) and potential pseudo-replication (99/200). Complex factor arrangements were common, with 112 experiments using some form of factorial design (complete, incomplete, or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means that understanding bias and error in preclinical experiments may require greater expertise than is needed for simple parallel designs. Similarly, complex factor arrangements create novel challenges for accurate automation of data extraction and of bias and error assessment in preclinical experiments.
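The unit-of-analysis error at the heart of the abstract can be made concrete with a small simulation. The sketch below (an illustration under assumed parameters, not an analysis from the study) shows why pseudo-replication matters: treating repeated measurements on the same animal as independent observations overstates the effective sample size, so the naive standard error is much smaller than one computed from one summary value per experimental unit (the animal).

```python
# Minimal sketch of a unit-of-analysis error (pseudo-replication).
# Parameters (animal counts, variances) are hypothetical, chosen only
# to make between-animal variation dominate within-animal noise.
import random
import statistics

random.seed(42)

N_ANIMALS = 10    # true experimental units
N_MEASURES = 5    # repeated measurements per animal

measurements = []   # all 50 raw values, as a naive analysis would pool them
animal_means = []   # one summary value per animal (the correct unit)
for _ in range(N_ANIMALS):
    baseline = random.gauss(0.0, 2.0)   # between-animal variation
    values = [baseline + random.gauss(0.0, 0.5)   # within-animal noise
              for _ in range(N_MEASURES)]
    measurements.extend(values)
    animal_means.append(statistics.mean(values))

# Naive (pseudo-replicated) analysis: pretends n = 50 independent observations.
naive_se = statistics.stdev(measurements) / (len(measurements) ** 0.5)

# Unit-respecting analysis: n = 10 animals, one mean per animal.
correct_se = statistics.stdev(animal_means) / (len(animal_means) ** 0.5)

print(f"naive SE (n={len(measurements)}):   {naive_se:.3f}")
print(f"correct SE (n={len(animal_means)}): {correct_se:.3f}")
```

Because the repeated measurements within an animal are correlated, the naive standard error is markedly smaller than the unit-respecting one, which inflates false-positive rates; this is the kind of issue that the design elements above (independence of study units, unit of allocation) are meant to surface.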
