May 15, 2020

Test accuracy

There’s a new COVID case from the Marist College cluster today. The person previously tested negative, but had been in isolation (perhaps, though the Herald doesn’t say, because of the combination of symptoms and being a contact). Now that we have plenty of testing capacity, there has been follow-up testing of some clusters as well as testing of some apparently healthy people in high-risk jobs.

From what I’ve seen on social media, this has led some more people to find out about the false negative rates of the current tests.  It’s not a secret that the swab+PCR test we use in NZ misses maybe a third of infections (because there isn’t enough virus on the swab), though it hasn’t exactly been emphasised.  So, how is this acceptable? Well, “acceptable” depends on the alternative. It’s the best test we have. Researchers (and companies) are working on better ones, and things are likely to improve over time, as they did with HIV testing.  If you’ve been in contact with a case and have COVID-like symptoms and test negative, you’re still going to need to isolate until you recover.  That, plus the fact that the testing does pick up the majority of cases, means a test/trace/isolate strategy, done right, should be nearly enough to control an outbreak that’s caught early.
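To see why follow-up testing of clusters helps even with that miss rate, here’s a rough back-of-the-envelope sketch (mine, not from the post). If each swab+PCR test has roughly a two-in-three chance of detecting a true infection, and you optimistically treat repeat swabs from the same person as independent, the chance of missing an infection drops quickly with follow-up tests. In practice repeat swabs aren’t independent, so the real gain will be smaller.

    # Rough sketch, not from the post: chance of missing an infection after
    # repeated swab+PCR tests, assuming (optimistically) that each test
    # independently detects a true infection with probability ~2/3.
    sensitivity = 2 / 3
    for n_tests in (1, 2, 3):
        p_missed = (1 - sensitivity) ** n_tests  # every test falsely negative
        print(f"{n_tests} test(s): chance of missing the infection = {p_missed:.1%}")
    # 1 test(s): chance of missing the infection = 33.3%
    # 2 test(s): chance of missing the infection = 11.1%
    # 3 test(s): chance of missing the infection = 3.7%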

The current tests give basically no false positives. That’s really helpful for a test/trace/isolate strategy — we’ve done 200,000 tests, and if, say, 5% of them were false positives, that would be another 10,000 cases, before even counting all the contacts of those 10,000 people. The low false positive rate also means the health system can say, “yes, you need to get tested”, and then after a positive result, “yes, you absolutely must stay home”, “yes, you need to tell us about all the places you’ve been, even if some of them are embarrassing or illegal”.
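That arithmetic is worth spelling out, since the false positive rate matters on the scale of total tests, not total cases. A quick sketch (only the 5% rate and the 200,000 tests come from the paragraph above; the other rates are just for illustration):

    # How many spurious "cases" would contact tracers have to chase at various
    # hypothetical false positive rates, given ~200,000 tests done so far?
    tests_done = 200_000
    for fp_rate in (0.05, 0.01, 0.001):
        false_positives = tests_done * fp_rate
        print(f"false positive rate {fp_rate:.1%}: about {false_positives:,.0f} spurious cases")
    # false positive rate 5.0%: about 10,000 spurious cases
    # false positive rate 1.0%: about 2,000 spurious cases
    # false positive rate 0.1%: about 200 spurious cases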

There’s another testing-accuracy story in the New York Times, unhelpfully headlined Coronavirus Testing Used by the White House Could Miss Infections. It turns out that they don’t mean that it could miss infections the way all the other tests do; they mean it could miss infections that other tests detect. The test in question is a portable testing machine from Abbott that takes only 5 minutes to process a sample, quite a bit faster than the standard testing systems. Researchers from a testing lab at New York University’s Langone Medical Center (who liked the idea of a faster test, given the number of tests they perform) did comparisons to the machines they are currently using and published a PDF about it: some tests on the same swabs and some on different swabs taken at the same time from the patient. They say the Abbott machine missed 1/3 (of 15) or 1/2 (of 30) of the samples where the current machine found the virus.

Abbott, on the other hand, said their evaluations showed a 0.02% false negative rate.

You might wonder how two evaluations could be so different: one false negative in 5,000 samples vs 5 in 15 samples? As usual, when two numbers don’t fit, it’s probably because they don’t mean the same thing. Abbott will have been referring to an estimated false negative rate in viral samples with a known virus concentration — an evaluation of the assay itself. The NYU researchers are talking about live clinical use in samples where they know the virus is present in the swab. And when we talk about false negatives in the NZ sampling system, we’re talking about samples where the virus probably isn’t present in the swab.
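To see just how incompatible the two figures are if you insist on reading them as the same quantity, here’s a quick binomial calculation (my sketch, not from the post): at a true false negative rate of 0.02%, seeing 5 or more misses in 15 known-positive samples would be essentially impossible.

    # If the per-sample false negative rate really were 0.02%, how likely is it
    # to see 5 or more misses in 15 known-positive samples?
    from math import comb

    p = 0.0002   # Abbott's quoted 0.02% false negative rate
    n = 15       # samples in the smaller NYU comparison
    k_obs = 5    # misses reported ("1/3 of 15")

    prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_obs, n + 1))
    print(f"P(at least {k_obs} misses out of {n}) = {prob:.1e}")
    # P(at least 5 misses out of 15) = 9.6e-16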

Abbott argue that the NYU researchers were using the machine incorrectly. That could be true, but it’s only reassuring to the extent that you think the White House will be better at it than a pretty highly regarded New York hospital and research centre.


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

Comments

  • Behzad Kianian

    “As usual, when two numbers don’t fit, it’s probably because they don’t mean the same. Abbott will have been referring to an estimated false negative rate in viral samples with a known virus concentration — an evaluation of the assay itself. The NYU researchers are talking about live clinical use in samples where they know the virus is present in the swab.”

    I’m having a very hard time understanding why these mean something different — probably due to my own ignorance about testing and biology. In both cases, they are estimating false negatives among cases where the virus is present. What am I missing? Is it that the Abbott testing was based on very high virus concentrations in their samples, whereas the swabs could have some low or high concentration? Apologies if I am missing something obvious.


    • Thomas Lumley

      That’s a good point. They are different because the Abbott claim is (presumably) under standardised conditions where there is some specified concentration of viral RNA in the liquid that they assay, and the NYU claim is about swabs in real clinical use that led to positive tests on their other machine. Abbott’s claim about misuse of their machine seems to be that the step of getting from virus on the swab to RNA in the liquid was done wrong.
