Posts filed under Medical news (309)

August 30, 2015

Genetically targeted cancer treatment

Targeting cancer treatments to specific genetic variants has certainly had successes with common mutations — the best-known example must be Herceptin for an important subset of breast cancer. Reasonably affordable genetic sequencing has the potential for finding specific, uncommon mutations in cancers where there isn’t a standard, approved drug.

Most good ideas in medicine don’t work, of course, so it’s important to see if this genetic sequencing really helps, and how much it costs.  Ideally this would be in a randomised trial where patients are randomised to the best standard treatment or to genetically-targeted treatment. What we have so far is a comparison of disease progress for genetically-targeted treatment compared to a matched set of patients from the same clinic in previous years.  Here’s a press release, and two abstracts from a scientific conference.

In 72 out of 243 patients whose disease had progressed despite standard treatment, the researchers found a mutation that suggested the patient would benefit from some drug they wouldn’t normally have got. The median time until these patients started getting worse again was 23 weeks; in the historical patients it was 12 weeks.

The Boston Globe has an interesting story talking to researchers and a patient (though it gets some of the details wrong).  The patient they interview had melanoma and got a drug approved for melanoma patients but only those with one specific mutation (since that’s where the drug was tested). Presumably, though the story doesn’t say, he had a different mutation in the same gene — that’s where the largest benefit of sequencing is likely to be.

An increase from 12 to 23 weeks isn’t terribly impressive, and it came at a cost of US$32,000 — the abstract and press release say there wasn’t a cost increase, but that’s because they looked at cost per week, not total cost. It’s not nothing, though; it’s probably large enough that a clinical trial makes sense and small enough that a trial is still ethical and feasible.
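The cost arithmetic is worth spelling out. A minimal sketch (only the US$32,000 figure and the 12-vs-23-week durations come from the report; the per-week cost is an assumption chosen for illustration):

```python
# Illustrative only: 'no increase in cost per week' is entirely consistent
# with a large increase in total cost when treatment lasts longer.
weeks_targeted, weeks_historical = 23, 12
cost_per_week = 2900  # assumed: roughly US$32,000 / 11 extra weeks

total_targeted = weeks_targeted * cost_per_week      # 66,700
total_historical = weeks_historical * cost_per_week  # 34,800
extra_cost = total_targeted - total_historical
print(extra_cost)  # 31900: close to the reported US$32,000, at equal cost/week
```

Equal cost per week with nearly double the duration means nearly double the total bill.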

The Boston Globe story is one of the first products of their new health-and-medicine initiative, called “Stat”. That’s not short for “statistics”; it’s the medical slang meaning “right now”, from the Latin statim.

August 20, 2015

The second-best way to prevent hangovers?

From Stuff: “Korean pears are the best way to prevent hangovers, say scientists.”

This is precisely not what scientists say; in fact, the scientist in question is even quoted (in the last line of the story) as not saying that.

Meanwhile, as a responsible scientist, she reminded that abstaining from excess alcohol consumption is the only certain way to avoid a hangover.

At least Stuff got ‘prevention’ in the headline. Many other sources, such as the Daily Mail, led with claims of a “hangover cure.”  The Mail also illustrated the story with a photo of the wrong species: the research was on the Asian species Pyrus pyrifolia, rather than the European pear Pyrus communis. CSIRO hopes that European pears are effective, since that’s what Australia has vast quantities of, but they weren’t tested.

What Stuff doesn’t seem to have noticed is that this isn’t a new CSIRO discovery. The blog post certainly doesn’t go out of its way to make that obvious, but right at the bottom, after the cat picture, the puns, and the Q&A with the researcher, you can read

Manny also warns this is only a preliminary scoping study, with the results yet to be finalised. Ultimately, her team hope to deliver a comprehensive review of the scientific literature on pears, pear components and relevant health measures.

That is, the experimental study on Korean pears isn’t new research done at CSIRO. It’s research done in Korea, and published a couple of years ago. There’s nothing wrong with this, though it would have been nice to give credit, and it would have made the choice of Korean pears less mysterious.

The Korean researchers recruited a group of young Korean men, and gave alcohol (in the form of soju), preceded by either Korean pear juice or placebo pear juice (pear-flavoured sweetened water).  Blood chemistry studies, as well as research in mice by the same group, suggest that the pear juice speeds up the metabolism of alcohol and acetaldehyde. This didn’t prevent hangovers, but it did seem to lead to a small reduction in hangover severity.

The study was really too small to be very convincing. Perhaps more importantly, the alcohol dose was nearly eleven standard drinks (540ml of 20% alcohol) over a short period of time, so you’d hope it was relevant to a fairly small group of people.  Even in Australia.


August 6, 2015

Feel the burn

Q: What did you have for lunch?

A: Sichuan-style dry-fried green beans

Q: Because of the health benefits of spicy food?

A: Uh.. no?

Q: “Those who eat spicy foods every day have a 14 per cent lower risk of death than those who eat it less than once a week.” Didn’t you see the story?

A: I think I skipped over it.

Q: So, if my food is spicy I have a one in seven chance of immortality?

A: No.

Q: But 14% lower something? Premature death, like the Herald story says?

A: The open-access research paper says a 14% lower rate of death.

Q: Is that just as good?

A: According to David Spiegelhalter’s approximate conversion formula, that would mean about 1.5 years extra life on average, if it kept being true for your whole life.

Q: Ok. That’s still pretty good, isn’t it?

A: If it’s real.

Q: They had half a million people. It must be pretty reliable, surely?

A: The problem isn’t uncertainty so much as bias: people who eat spicy food might be slightly different in other ways. Having more people doesn’t help much with bias. Maybe there are differences in weight, or physical activity.

Q: Are there? Didn’t they look?

A: Um. Hold on. <reads> Yes, they looked, and no there aren’t. But there could be differences in lots of other things. They didn’t analyse diet in that much detail, and it wouldn’t be hard to get a bias of 14%.

Q: Is there a reason spicy food might really reduce the rate of death?

A: The Herald story says that capsaicin fights obesity, and the Stuff story says bland food makes you overeat.

Q: Didn’t you just say that there weren’t weight differences?

A: Yes.

Q: But it could work some other way?

A: It could. Who can tell?

Q: Ok, apart from your correlation and causation hangups, is there any reason I shouldn’t at least use this to feel good about chillies?

A: Well, there’s the fact that the correlation went away in people who regularly drank any alcohol.

Q: Oh. Really?

A: Really. Figure 2 in the paper.

Q: But that’s just correlation, not causation, isn’t it?

A: Now you’re getting the idea.
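The Spiegelhalter conversion mentioned in the exchange above can be sketched as a back-of-envelope calculation (the 0.1 constant encodes the rule of thumb that a lifelong ~10% change in hazard shifts life expectancy by about a year; it’s an approximation, not his exact formula):

```python
import math

hazard_ratio = 0.86  # a 14% lower rate of death
# Rule of thumb: a lifelong hazard ratio h shifts life expectancy by
# roughly -log(h) / 0.1 years (each ~10% change in hazard ~ 1 year)
years_gained = -math.log(hazard_ratio) / 0.1
print(round(years_gained, 1))  # about 1.5 years
```

And, as the dialogue says, that 1.5 years only holds if the association is causal and persists for a whole lifetime of spicy eating.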



August 5, 2015

What does 90% accuracy mean?

There was a lot of coverage yesterday about a potential new test for pancreatic cancer. 3News covered it, as did One News (but I don’t have a link). There’s a detailed report in the Guardian, which starts out:

A simple urine test that could help detect early-stage pancreatic cancer, potentially saving hundreds of lives, has been developed by scientists.

Researchers say they have identified three proteins which give an early warning of the disease, with more than 90% accuracy.

This is progress; pancreatic cancer is one of the diseases where there genuinely is a good prospect that early detection could improve treatment. The 90% accuracy, though, doesn’t mean what you probably think it means.

Here’s a graph showing how the error rate of the test changes with the numerical threshold used for diagnosis (figure 4, panel B, from the research paper).


As you move from left to right the threshold decreases; the test is more sensitive (picks up more of the true cases), but less specific (diagnoses more people who really don’t have cancer). The area under this curve is a simple summary of test accuracy, and that’s where the 90% number came from.  At what the researchers decided was the optimal threshold, the test correctly reported 82% of early-stage pancreatic cancers, but falsely reported a positive result in 11% of healthy subjects.  These figures are from the set of people whose data was used in putting the test together; in a new set of people (“validation dataset”) the error rate was very slightly worse.

The research was done with an approximately equal number of healthy people and people with early-stage pancreatic cancer. They did it that way because that gives the most information about the test for a given number of people.  It’s reasonable to hope that the area under the curve, and the sensitivity and specificity of the test will be the same in the general population. Even so, the accuracy (in the non-technical meaning of the word) won’t be.

When you give this test to people in the general population, nearly all of them will not have pancreatic cancer. I don’t have NZ data, but in the UK the current annual rate of new cases goes from 4 people out of 100,000 at age 40 to 100 out of 100,000 at ages 85+. The average over all ages is 13 cases per 100,000 people per year.

If 100,000 people are given the test and 13 have early-stage pancreatic cancer, about 10 or 11 of the 13 cases will have positive tests, but so will 11,000 healthy people.  Of those who test positive, 99.9% will not have pancreatic cancer.  This might still be useful, but it’s not what most people would think of as 90% accuracy.
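The arithmetic behind those numbers is worth making explicit (sensitivity and false-positive rate are taken from the research paper; the incidence of 13 per 100,000 per year is the UK figure quoted above):

```python
# Positive predictive value of the test in the general population
population = 100_000
cases = 13                  # new pancreatic cancers per 100,000 per year
sensitivity = 0.82          # fraction of true cases the test flags
false_positive_rate = 0.11  # fraction of healthy people flagged anyway

true_positives = cases * sensitivity                          # ~10.7
false_positives = (population - cases) * false_positive_rate  # ~11,000
ppv = true_positives / (true_positives + false_positives)
print(f"{ppv:.2%} of positive tests are real cases")  # about 0.1%
```

That is, roughly one positive test in a thousand would be a genuine cancer, which is the sense in which “90% accuracy” is misleading for screening.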


August 1, 2015

Ebola vaccine trial

You’ve probably heard that there are positive results from an Ebola vaccine trial (3News, Radio NZ, Stuff, Herald). The stories are all actually good. Here’s the (open-access) research paper.

The vaccine was genetically engineered: it’s a live virus for an animal disease that doesn’t spread in humans, modified to produce just one Ebola protein. Having a live virus makes the immune system respond more enthusiastically, but you wouldn’t want to risk a vaccine containing anything even remotely like live Ebola virus. Genetic engineering produces a live virus that contains none of the functional bits of Ebola, so that even if it (improbably) turned out to be able to spread, it wouldn’t be a big deal.

The basic trial design was to find Ebola cases and vaccinate their contacts and the contacts of their contacts, with randomisation between immediate vaccination and vaccination 21 days later.  The design was a good compromise: the public-health authorities need to know if the vaccine works in order to decide whether it can be used to control future epidemics, but since it probably does little harm, most people would want to be vaccinated.  With this design, everyone the doctors talk to will get the vaccine, either immediately or in three weeks. The design is also cost-effective, since everyone you need to vaccinate is someone the public health system would want to check up on anyway.

In practice, not everyone eligible will end up being vaccinated: some will refuse, and some will not be contactable. You have to decide how to include the unvaccinated people in the analysis.  In this trial, no-one in the immediate-vaccination group who actually got vaccinated ended up with Ebola, which is what gives the headline 100% success rate. If you compare people who were randomised to immediate vaccination, whether they got it or not, with those who were randomised to delayed vaccination (a more common analysis strategy), the vaccine was still estimated as 75% effective.
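The difference between the two analysis strategies can be illustrated with a toy calculation (all the counts below are made up for illustration; the trial’s actual numbers are in the paper):

```python
# Vaccine effectiveness = 1 - risk ratio (vaccinated vs comparison group)
def effectiveness(cases_a, n_a, cases_b, n_b):
    return 1 - (cases_a / n_a) / (cases_b / n_b)

# Per-protocol: count only people who actually got the vaccine; zero
# cases among them is what gives a headline 100% figure.
print(effectiveness(0, 2000, 16, 2000))   # 1.0, i.e. 100% effective

# Intention-to-treat: count everyone randomised to immediate vaccination,
# including those who refused or couldn't be reached, so some cases appear.
print(effectiveness(6, 2400, 24, 2400))   # 0.75, i.e. 75% effective
```

The intention-to-treat figure is usually preferred because randomisation only guarantees the two whole groups are comparable, not the subsets who happened to get vaccinated.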

There’s still some work to do: when the vaccine is used, it will be important to keep track as far as possible of how well it works. That’s important because we need to know if it’s worth working on a new vaccine, or whether to divert resources to research on treatments for those who are infected, or to other diseases.  For a change, though, this is good news.


July 25, 2015

Some evidence-based medicine stories

  • Ben Goldacre has a piece at Buzzfeed, which is nonetheless pretty calm and reasonable, talking about the need for data transparency in clinical trials
  • The Alltrials campaign, which is trying to get regulatory reform to ensure all clinical trials are published, was joined this week by a group of pharmaceutical company investors.  This is only surprising until you think carefully: it’s like reinsurance companies and their interest in global warming — they’d rather the problems would go away, but there’s no profit in just ignoring them.
  • The big potential success story of scanning the genome blindly is a gene called PCSK9: people with a broken version have low cholesterol. Drugs that disable PCSK9 lower cholesterol a lot, but have not (yet) been shown to prevent or postpone heart disease. They’re also roughly 100 times more expensive than the current drugs, and have to be injected. Nonetheless, they will probably go on sale soon.
    A survey of a convenience sample of US cardiologists found that they were hoping to use the drugs in 40% of their patients who have already had a heart attack, and 25% of those who have not yet had one.

July 22, 2015

Are reusable shopping bags deadly?

There’s a research report by two economists arguing that San Francisco’s ban on plastic shopping bags has led to a nearly 50% increase in deaths from foodborne disease, an increase of about 5.5 deaths per year.  I was asked my opinion on Twitter. I don’t believe it.

What the analysis does show is some evidence that emergency room visits for foodborne disease have increased: the researchers analysed admissions for E. coli, Salmonella, and Campylobacter infection, and found an increase in San Francisco but not in neighbouring counties. There’s a statistical issue in that the number of counties is small and the standard error estimates tend to be a bit unreliable in that setting, but that’s not prohibitive. There’s also a statistical issue in that we don’t know which (if any) infections were related to contamination of raw food, but again that’s not prohibitive.

The problem with the analysis of deaths is the definition: the deaths in the analysis were actually all of the ICD10 codes A00-A09. Most of this isn’t foodborne bacterial disease, and a lot of the deaths from foodborne bacterial disease will be in settings where shopping bags are irrelevant. In particular, two important contributors are

  • Clostridium difficile infections after antibiotic use, which has a fairly high mortality rate
  • Diarrhoea in very frail elderly people, in residential aged care or nursing homes.

In the first case, this has nothing to do with food. In the second case, it’s often person-to-person transmission (with norovirus a leading cause), but even if it is from food, the food isn’t carried in reusable shopping bags.

Tomás Aragón, of the San Francisco Department of Public Health, has a more detailed breakdown of the death data than was available to the researchers. His memo is, I think, too negative on the statistical issues, but the data underlying the A00-A09 categories are pretty convincing:


Category A021 is Salmonella (other than typhoid); A048 and A049 are other miscellaneous bacterial infections; A081 and A084 are viral. A090 and A099 are left-over categories that are supposed to exclude foodborne disease but will capture some cases where the mechanism of infection wasn’t known.  A047 is Clostridium difficile.   The apparent signal is in the wrong place. It’s not obvious why the statistical analysis thinks it has found evidence of an effect of the plastic-bag ban, but it is obvious that it hasn’t.

Here, for comparison, are New Zealand mortality data for specific foodborne infections, from the three most recent years available


Over the three years, there were only ten deaths where the underlying cause was one of these food-borne illnesses — a lot of people get sick, but very few die.


The mortality data don’t invalidate the analysis of hospital admissions, where there’s a lot more information and it is actually about (potentially) foodborne diseases.  More data from other cities — especially ones that are less atypical than San Francisco — would be helpful here, and it’s possible that this is a real effect of reusing bags. The economic analysis, however, relies heavily on the social costs of deaths.

July 16, 2015

Don’t just sit there, do something

The Herald’s story on sitting and cancer is actually not as good as the Daily Mail story it’s edited from. Neither one gives the journal or researchers (the paper is here). Both mention a previous study, but the Mail goes into more useful detail.

The basic finding is

Longer leisure-time spent sitting, after adjustment for physical activity, BMI and other factors, was associated with risk of total cancer in women (RR=1.10, 95% CI 1.04-1.17 for >6 hours vs. <3 hours per day), but not men (RR=1.00, 95% CI 0.96-1.05)

The lack of association in men was a surprise, and strongly suggests that the result for women shouldn’t be believed. It’s also notable that while the estimated associations with a few types of cancer look strong, the lower limits on the confidence intervals don’t look strong:

risk of multiple myeloma (RR=1.65, 95% CI 1.07-2.54), invasive breast cancer (RR=1.10, 95% CI 1.00-1.21), and ovarian cancer (RR=1.43, 95% CI 1.10-1.87).

Since the researchers looked at eighteen subsets of cancer in addition to all types combined, and these are the top three, the real lower limits are even lower.
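The multiple-comparisons point can be made concrete with a quick calculation (assuming, unrealistically, that the tests are independent and done at the 5% level; the real outcomes are correlated, but the qualitative point stands):

```python
# With 19 outcomes examined (18 cancer subsets plus all types combined)
# and no real effects at all, the chance that at least one 95% confidence
# interval excludes 'no difference' is far larger than 5%.
tests = 19
p_at_least_one = 1 - 0.95 ** tests
print(round(p_at_least_one, 2))  # about 0.62
```

So finding three nominally ‘significant’ subsets out of nineteen is roughly what you’d expect even if sitting had no effect on any cancer.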

The stories referred to previous research, published last year, which summarised many previous studies of sitting and cancer risk.  That’s good, but the summary wasn’t entirely accurate. From the Herald:

Previous research by the University of Regensburg in Germany found that spending too much time sitting raised the risk of bowel and lung cancer in both men and women.

In fact, the previous research didn’t look separately at men and women (or, at least, didn’t report doing so). While you would expect similar results in men and women, that study doesn’t address the question.

The Mail does have one apparently good contextual point

However, this previous study – which reviewed 43 other studies – did not find a link between sitting and a higher risk of breast and ovarian cancer. 

But when you look at the actual figures, there’s no real inconsistency between the two studies: they both report weak evidence of higher risk; it’s just a question of whether the lower end of the confidence interval happens to cross the ‘no difference’ line for a particular subset of cancers.

Overall, this is a pretty small risk difference to detect from observational data. If you didn’t already think that long periods of sitting could be bad for you, this wouldn’t be a reason to start.

July 14, 2015

Another test for Alzheimer’s?

The Herald (from the Telegraph) has a story today about a Google Science Fair contestant, under the headline “Has a 15-year-old found a way to test for Alzheimer’s?”. This is the sort of science story it’s good to see in the papers, but it would be better if it were more accurate.

Krtin Nithiyanandam’s research is impressive even if you ignore the fact that he was only 14. But claiming he

 has developed a “Trojan horse” antibody which can penetrate the brain and attach itself to the toxic proteins present in the disease’s early stages.

is a bit of an exaggeration.

The project write-up describes how he attached antibodies to fluorescent quantum dots. These, cleverly, fluoresce at a near-infrared wavelength which passes through tissue, skin, and bone.  If the project works, it would be possible to screen for Alzheimer’s without even a lumbar puncture.

That’s still ‘if’. Despite what the story says, Krtin hasn’t tested the antibody on any actual brains. Theoretically, it binds to a transporter protein in the right way to penetrate the brain, but it needs testing. It also needs testing for toxicity — if it’s going to be used for screening, it will be injected into large numbers of healthy people, so has to be safe. After all that, it would have to be tested for predictive accuracy: to be useful, the test would have to have a very low false-positive rate. And, on top of that, for testing to really be helpful there would need to be some treatment that showed some sign of actually working. We’re not there yet.

You might also wonder how this relates to the four other early Alzheimer’s tests the Herald has reported on in the past year or so, or the other two proposed by Google Science Fair finalists.  Testing for Alzheimer’s has been an area with a lot of recent research, which is going to be useful if we ever have promising drugs to test.


June 21, 2015

Sunbathing and babies

The Herald (from the Daily Mail)

A sunshine break is the perfect way to unwind, catch up on your reading and top up that tan.

But it seems a week soaking up the rays could also offer a surprising benefit – helping a woman have a baby.

Increased exposure to sunshine could raise the odds of becoming a mother by more than a third, a study suggests.


If you read StatsChat regularly, you probably won’t be surprised to hear the study had nothing to do with either holidays or sunbathing, or fertility in the usual sense.

As the story goes on to say, it was about the weather and IVF success rates. The researchers looked for correlations between a variety of weather measurements and a variety of ways of measuring IVF success. They didn’t find evidence of correlations with the weather at the time of conception. As they said (conference abstract, since this isn’t published)

When looking for a linear correlation between IVF results and the mean monthly values for the weather, the results were inconsistent.

So, following the ‘try, try again’ strategy, they looked at the weather a month earlier

However, when the same analysis was repeated with the weather results of 1 month earlier, there was a clear trend towards better IVF outcome with higher temperature, less rain and more sunshine hours. 

It helps, here, to know that “a clear trend” is jargon for “unimpressive statistical evidence, but at least in the direction we wanted”. That’s not the only problem, though. Since these are honest researchers, you find the other big problem in the section of the abstract labelled “limitations”

Because of the retrospective design of the study, further adjusting for possible confounding factors such as age of the woman, type of infertility and indication for IVF is mandatory. 

That is, their analysis lumped together women of different ages, types of infertility, and reasons for using IVF, even though these have a much bigger impact on success than is being claimed for the weather.

I don’t have any problem with these analyses being performed and presented to other consenting scientists who are trying to work out ways to improve IVF.  On the other hand,  I’m pretty sure the Daily Mail didn’t get these results by reading the abstract book or sitting through the conference. Someone made a deliberate decision to get publicity for this research, at this stage, in a form where all the cautionary notes would be lost.