Network meta‐analysis of disconnected networks: How dangerous are random baseline treatment effects?

In network meta-analysis, the use of fixed baseline treatment effects (a priori independent) in a contrast-based approach is generally preferred to the use of random baseline treatment effects (a priori dependent). This is because fixed baseline effects avoid the need to model the baseline treatment effects, and modelling them carries a risk of misspecification. However, in disconnected networks, fixed baseline treatment effects do not work (unless extra assumptions are made), as there is not enough information in the data to update the prior distribution on the contrasts between disconnected treatments. In this paper, we investigate to what extent the use of random baseline treatment effects is dangerous in disconnected networks. We take two publicly available datasets of connected networks and disconnect them in multiple ways. We then compare the results of treatment comparisons obtained from a Bayesian contrast-based analysis of each disconnected network using random, normally distributed, and exchangeable baseline treatment effects to those obtained from a Bayesian contrast-based analysis of the initial connected network using fixed baseline treatment effects. For the two datasets considered, we found that the use of random baseline treatment effects in disconnected networks was appropriate. Because those datasets were not cherry-picked, there should be other disconnected networks that would benefit from being analyzed with random baseline treatment effects. However, the normality and exchangeability assumptions may be inappropriate for other datasets, even though we did not observe this situation in our case study. We provide code so that other datasets can be investigated.
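To make the modelling choice concrete, the random-baseline approach described above can be sketched as a contrast-based binomial-logit model in the style of Lu and Ades, with the study baselines drawn from a common normal distribution rather than given independent vague priors. This is an illustrative sketch only, not the paper's actual code: all variable names are hypothetical, and the multi-arm correlation between contrasts is omitted for brevity.

```jags
# Hedged sketch of a contrast-based NMA model with RANDOM
# (normally distributed, exchangeable) baseline treatment effects.
# Data assumed: r[i,k] events out of n[i,k] in arm k of study i,
# t[i,k] the treatment in that arm, nArms[i] arms per study.
model {
  for (i in 1:nStudies) {
    # Exchangeable study baselines: mu[i] ~ N(m, sd.m^2).
    # This shared distribution is what lets information flow
    # across the gap in a disconnected network; with fixed
    # (independent, vague) baselines it could not.
    mu[i] ~ dnorm(m, prec.m)
    delta[i, 1] <- 0
    for (k in 1:nArms[i]) {
      r[i, k] ~ dbin(p[i, k], n[i, k])
      logit(p[i, k]) <- mu[i] + delta[i, k]
    }
    for (k in 2:nArms[i]) {
      # Random contrasts relative to each study's own baseline arm
      # (multi-arm correlation ignored in this sketch).
      delta[i, k] ~ dnorm(d[t[i, k]] - d[t[i, 1]], prec.d)
    }
  }
  d[1] <- 0                                   # reference treatment
  for (j in 2:nTreatments) { d[j] ~ dnorm(0, 0.0001) }
  m ~ dnorm(0, 0.0001)                        # baseline mean
  sd.m ~ dunif(0, 5);  prec.m <- pow(sd.m, -2)
  sd.d ~ dunif(0, 5);  prec.d <- pow(sd.d, -2)
}
```

Replacing the line `mu[i] ~ dnorm(m, prec.m)` with independent vague priors `mu[i] ~ dnorm(0, 0.0001)` recovers the fixed-baseline model used for the connected networks, which is the comparison the paper makes.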
