The COVID-19 pandemic is forcing researchers, clinicians, and policymakers to accelerate the evaluation of treatments and vaccines. Critical to these evaluations is the ability to characterize the uncertainty of inferences in clear terms accessible to a broad set of stakeholders with varying statistical backgrounds. Here we express the uncertainty of inferences from Randomized Controlled Trials (RCTs) by quantifying how many patients would have to have experienced different outcomes to change the inference. For example, the inference of a positive effect of Hydroxychloroquine (HCQ) on pneumonia from an open-label RCT would be overturned if one of the treatment cases characterized as improved had instead been characterized as unchanged or exacerbated. We generalize the technique to apply to thresholds defined by any effect size. We also apply the analysis to an inference of no effect of Remdesivir on mortality and to a historical example of the effect of anti-hypertensive treatments on stroke. Quantifying the robustness of inferences in terms of patient outcomes supports a more precise dialogue among clinicians, researchers, policymakers, and the general public.
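As a minimal sketch of the idea, the snippet below counts how many treatment-arm patients would have to be reclassified from improved to unchanged/exacerbated before the evidence for a positive effect is no longer statistically significant. The use of a one-sided Fisher's exact test, the α = 0.05 threshold, and the illustrative counts are assumptions made here for illustration; they are not necessarily the decision rule used in the paper or the actual trial data.

```python
from scipy.stats import fisher_exact


def patients_to_overturn(improved_tx, n_tx, improved_ctrl, n_ctrl, alpha=0.05):
    """Minimum number of treatment-arm patients that would need to be
    reclassified from 'improved' to 'unchanged/exacerbated' before a
    one-sided Fisher's exact test on the 2x2 outcome table no longer
    supports the inference of a positive treatment effect."""
    flips = 0
    while improved_tx >= 0:
        table = [[improved_tx, n_tx - improved_tx],
                 [improved_ctrl, n_ctrl - improved_ctrl]]
        _, p_value = fisher_exact(table, alternative="greater")
        if p_value >= alpha:   # positive-effect inference no longer supported
            return flips
        improved_tx -= 1       # reclassify one more improved patient
        flips += 1
    return flips


# Hypothetical counts for illustration only: 25/31 improved on treatment
# versus 17/31 on control. Prints the number of reclassifications needed.
print(patients_to_overturn(improved_tx=25, n_tx=31, improved_ctrl=17, n_ctrl=31))
```

Under these assumptions, the returned count plays the role described above: it states, in units of individual patient outcomes, how much the observed data would have to differ to change the inference.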