Commentary on Engen et al.: Risk-based, dynamic, process-oriented monitoring strategies and their burden

The International Network for Strategic Initiatives in Global HIV Trials (INSIGHT) Strategic Timing of Antiretroviral Treatment (START) Study Group should be commended on the execution and completion of their trial, published in this issue of Clinical Trials. Given the high costs of monitoring and its key role in protecting trial integrity and patient safety, it is essential to rigorously evaluate specific monitoring techniques to guide decisions for future trials. The INSIGHT START trial serves this important function. In this commentary, we summarize the key results from Engen et al. and address four related topics: monitoring strategies proportional to risk, dynamic centralized monitoring, documenting processes, and the costs and burdens of centralized monitoring.

INSIGHT START is an ancillary cluster-randomized trial that was added to START, an international randomized treatment trial for persons with HIV. The INSIGHT START monitoring substudy compared annual on-site monitoring versus no on-site monitoring, with a primary composite outcome of trial errors comprising eligibility errors, consent violations, use of antiretroviral agents inconsistent with the protocol, late reporting of START clinical endpoints, data alteration, and fraud. Intensive central and local monitoring was carried out at all sites, with local monitoring results reported centrally twice each year. Engen et al. found that the primary monitoring outcome rate was higher with annual on-site monitoring than with no on-site monitoring (odds ratio (OR) = 1.7, 95% confidence interval (CI): 1.1–2.7). More violations of all types were detected in the on-site monitoring group. Minor informed consent violations (such as use of the wrong consent version) and reporting of serious adverse events outside the 6-month window were the most common violations, while eligibility violations were the most imbalanced between the two groups. No data alteration or fraud was detected. The authors concluded that the detected errors had minimal impact on the results of the START trial with respect to bias.

The intensity of a monitoring strategy should be proportional to the risk to patients and to data quality in the study, and it should adapt over time. It was not clear how this study determined the comprehensiveness of the initial centralized monitoring. Was a risk assessment done to identify the key vulnerabilities with the largest potential impact, along with how to monitor them? Moreover, was this assessment revisited and updated as new vulnerabilities were identified? Monitoring strategies often include escalation (de-escalation) procedures for sites exhibiting higher (lower) threats to data quality and integrity. For example, if a site demonstrates vulnerabilities, escalation from centralized monitoring to remote monitoring to triggered on-site monitoring may be needed. Alternatively, increased monitoring could be performed when a site starts up; once the site has demonstrated its knowledge of the protocol to the coordinating centers, it can be moved to a less intensive monitoring level. As new information identifies potential vulnerabilities, the centralized monitoring needs to adapt. As a specific example from this trial, many of the detected monitoring issues appear to have originated from one coordinating center. It is of interest to know how many sites this center oversaw and whether the errors were clustered within a few sites. More importantly, did any increased monitoring occur based on that finding, and were any conclusions drawn from these errors? Potential interventions could have included a root cause analysis, remote monitoring visits, an increased frequency of local monitoring, and/or a coordinating center review. As another example, the study reported that the odds of detecting an error were higher with on-