December 9, 2018

Is 90% accuracy a lot?

There’s a headline in the Guardian: “Scientists develop 10-minute universal cancer test”. As you’d probably expect by now, that’s overstating things quite a bit. Let’s see what we can find.

The Guardian story doesn’t link to the open-access research paper, but there’s a piece by the researchers themselves in The Conversation. It doesn’t link, either. However, Google finds the science news website phys.org, and it does link.

The idea is that the C of the four DNA bases (A, C, T, G) exists in two versions (C and Ç, say). The modified (methylated) version is involved in turning genes off, so successful tumours have often managed to get rid of the modifications near genes important for cell growth. The clever part is that changes in methylation can affect how the DNA sticks to itself and to other things, such as gold nanoparticles, where it’s detectable because it changes the colour of the particles in solution.

The research paper shows how this sort of science works. There’s a lot of effort put into measuring how DNA sticks to things, and then showing that it’s really methylation that the test is measuring, including tests with DNA that’s had methylation added or removed artificially. Then there are tests with DNA extracted from a selection of real tumours and non-tumours, seeing how well the decisions correlate with cancer. All of this is important as a foundation for the science.

Finally, there’s the data that is directly related to testing for cancer: running the test on DNA found floating free in the blood of people with and without diagnosed cancer:

Statistical diagnostic efficacy test at cutoff value %ir = 35.7 shows that our method has high accuracy (83.45%) with high positive and negative predictive values (Table-Fig. 3d, PPV = 91.30%, NPV = 69.81%, see more details at Supplementary Table 3). 

The positive way to put this is that it’s pretty impressive for a first effort, and that optimising the test might make it really useful. The less positive way to put it is that the positive predictive value of 91.3% (meaning that 91.3% of the people who test positive actually have cancer) happened when two-thirds of all the people tested had cancer.
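
As a rough check (a back-of-the-envelope sketch of my own, not a calculation reported in the paper), those three quoted percentages are enough to pin down the underlying two-by-two table, and with it the test’s sensitivity and specificity:

    # A sketch, not the paper's analysis: assume the quoted accuracy, PPV and NPV
    # all describe the same set of test results, and back out the 2x2 table.
    acc, ppv, npv = 0.8345, 0.9130, 0.6981

    # accuracy = ppv*pos + npv*(1 - pos), so the fraction testing positive is:
    pos = (acc - npv) / (ppv - npv)

    tp = ppv * pos                 # true positives, as a fraction of everyone tested
    fp = (1 - ppv) * pos           # false positives
    tn = npv * (1 - pos)           # true negatives
    fn = (1 - npv) * (1 - pos)     # false negatives

    prevalence  = tp + fn          # ~0.69 -- the "two-thirds" above
    sensitivity = tp / (tp + fn)   # ~0.84
    specificity = tn / (tn + fp)   # ~0.82
    print(f"prevalence {prevalence:.2f}, sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")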

In that example, roughly one in six of the people without cancer tested positive. Suppose instead you’re using it for screening, and that 1 in 100 people really have detectable cancer. You probably pick up that one person, but you also pick up about 18 people without cancer. And since the other attribute of the test is that it’s supposed to be sensitive to any type of cancer anywhere in the body, you’re going to need to do a lot of further investigation to reassure those 18 people.
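
Here’s the same arithmetic for that hypothetical screening population (the 1-in-100 prevalence is purely an assumption for illustration, as is carrying over the sensitivity and specificity from the sketch above):

    # Same sketch, carried over to screening: keep the ~84% sensitivity and
    # ~82% specificity (an assumption), but let only 1 in 100 people screened
    # actually have a detectable cancer.
    sens, spec, prev = 0.84, 0.82, 0.01

    true_pos  = sens * prev * 100               # ~0.8 real cancers per 100 screened
    false_pos = (1 - spec) * (1 - prev) * 100   # ~18 false alarms per 100 screened

    ppv_screen = true_pos / (true_pos + false_pos)
    print(f"per 100 screened: {true_pos:.1f} true vs {false_pos:.1f} false positives")
    print(f"PPV at 1% prevalence: {ppv_screen:.1%}")  # ~4.5%

At that prevalence only about one positive test in twenty would be a real cancer, which is why the false positives dominate.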


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

Comments

  • Steve Curtis

    The Science editor of the Guardian, whose byline is on this story, has a PhD in Biomedical Materials, previous journalism experience at New Scientist, and a stint as a journal editor at the Institute of Physics. All heavyweight stuff. He does say the journal they have published in is Nature Communications, which you found by other means.
    With a title like this: “Epigenetically reprogrammed methylation landscape drives the DNA self-assembly and serves as a universal cancer biomarker”, I hate to think where the Daily Mail would have gone with ‘DNA self-assembly’.
    However, it’s ‘epigenetics’ that will likely be a word we hear a lot more of in future; it’s well worth looking up.


  • Antonio Rinaldi

    Speaking of methylation, have you already seen this paper?
    https://twitter.com/cesifoti/status/1074627464373121026
    P.S.: I don’t like this kind of endorsement at all.
