March 19, 2020

“No evidence” vs “doesn’t work”

One of the problems with reporting of clinical trials is failure to distinguish “we showed this treatment doesn’t work” from “we didn’t show this treatment worked”.  In both cases, what people mean is that their uncertainty interval included “no benefit”, but it matters what other possibilities are in the interval.

A clinical trial in COVID-19 has just reported in the New England Journal of Medicine, testing the HIV drug lopinavir, which showed some lab evidence of being able to block the replication of SARS-CoV-2, the COVID-19 virus.  The great advantage of lopinavir is that we’re already manufacturing and distributing lots of it, in many countries, so if it worked, it could fairly easily be made available.

The trial reported

In hospitalized adult patients with severe Covid-19, no benefit was observed with lopinavir–ritonavir treatment beyond standard care.

It’s natural to interpret this as “it doesn’t work”, but it’s not really accurate.  The mortality results are given as

Mortality at 28 days was similar in the lopinavir–ritonavir group and the standard-care group (19.2% vs. 25.0%; difference, −5.8 percentage points; 95% CI, −17.3 to 5.7)

Now, that’s very weak evidence of a benefit, but the interval goes from a massive 17 percentage point reduction in mortality to a 6 percentage point increase. The data are slightly closer to a 10 percentage point reduction in mortality than to no effect. The trial is just too small to tell us the answer.  Personally, I don’t expect lopinavir to work, based on what I’ve read from medicinal chemists who don’t think the lab motivation was compelling, but either way this trial isn’t decisive.
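As a rough check on where an interval like that comes from (not necessarily the exact method the paper used), here’s a minimal sketch of a Wald confidence interval for a difference in proportions. The counts are illustrative, assuming roughly 100 patients per arm, which is about the size of this trial; with 19/99 vs 25/100 deaths the percentages match those reported.

```python
from math import sqrt

def risk_difference_ci(deaths1, n1, deaths2, n2, z=1.96):
    """Wald 95% CI for a difference in proportions (treatment minus control)."""
    p1, p2 = deaths1 / n1, deaths2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Illustrative counts: about 19.2% of 99 and 25.0% of 100, matching the
# percentages reported for the lopinavir-ritonavir and standard-care arms.
diff, lo, hi = risk_difference_ci(19, 99, 25, 100)
print(f"difference {diff:+.1%}, 95% CI {lo:+.1%} to {hi:+.1%}")
# roughly: difference -5.8%, 95% CI -17.3% to +5.7%
```

With these illustrative counts the simple Wald interval reproduces the reported numbers, and the point estimate of −5.8 percentage points does sit a little closer to a 10-point reduction than to zero.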

In ordinary times an undersized trial like this shouldn’t be done, and if it were done anyway, it wouldn’t end up in the world’s highest-impact medical journal.  In the current circumstances there will be a whole bunch of undersized trials being run, providing really valuable information, and we want them all to be published and combined.  And not overinterpreted.
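“Combined” here usually means something like inverse-variance pooling of each trial’s estimated effect. A minimal sketch of the fixed-effect version, with made-up numbers purely for illustration (none of these are real trial results):

```python
from math import sqrt

def pool_fixed_effect(estimates, std_errors, z=1.96):
    """Fixed-effect (inverse-variance) pooled estimate and 95% CI."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    se_pooled = sqrt(1 / sum(weights))
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled

# Hypothetical summaries from three small trials: risk differences (in
# percentage points) and their standard errors.
est, lo, hi = pool_fixed_effect([-5.8, -2.0, -8.5], [5.9, 6.5, 7.0])
print(f"pooled difference {est:+.1f} pp, 95% CI {lo:+.1f} to {hi:+.1f}")
```

The point isn’t the particular method; it’s that each undersized trial narrows the pooled uncertainty interval a bit, which is why we want them all published rather than overinterpreted one at a time.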


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.