Posts filed under Medical news

October 22, 2015

The wine when it is red

Q: Are you going to have a glass of wine tonight?

A: You mean as a celebration?

Q: No, because a glass of red wine has the same benefits as a gym session. The Herald story?

A: Yeah, nah.

Q: What part of “Red wine equal to a gym workout – study” don’t you understand?

A: How they got that from the research.

Q: Was this just correlations again?

A: No, it was a real experimental study.

Q: So I’m guessing you’re going to say “in mice”?

A: Effectively. It was in rats.

Q: They gave some rats red wine and made others do gym workouts?

A: No, there wasn’t any red wine.

Q: But the story… ah, I see. “A compound found in red wine”. They gave the rats this compound directly?

A: That’s right.

Q: And the gym workouts?

A: Basically, yes. The rats did treadmill runs, though they don’t report that they had headphones on at the time.

Q: So the resveratrol group ended up fitter than the exercise group?

A: No, both groups got the workouts. The resveratrol plus exercise group ended up fitter than the group just getting exercise.

Q: So, really, it’s about a glass of red wine plus a gym workout, not instead of a gym workout? If it was people, not rats?

A: Well, not “a glass”.

Q: How many glasses?

A: The rats got 146mg of resveratrol per kg of body weight per day. One standard conversion is to divide by 7 to get the equivalent mg/kg dose in humans: about 20. So for a 60kg person, that’s about 1200mg/day of resveratrol (the arithmetic is sketched at the end of this post).

Q: How much is in a glass of wine?

A: It depends on the size, but at 5 glasses per bottle, maybe 0.3 mg

Q: So we might need bigger glasses, then.

A: At least you’ll get plenty of exercise lifting them.
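For the record, here is the arithmetic from that exchange as a quick sketch. The 146mg/kg dose, the divide-by-7 conversion, the 60kg person and the 0.3mg-per-glass figure are all the numbers quoted above; the resveratrol content of a glass of red wine varies a lot, so treat the last step as a ballpark.

```python
# Back-of-the-envelope dose conversion, using the figures quoted in the post.
rat_dose_mg_per_kg = 146          # resveratrol given to the rats, per kg per day
conversion_factor = 7             # one standard rat-to-human scaling: divide by 7
human_weight_kg = 60
resveratrol_per_glass_mg = 0.3    # rough figure for one glass (5 glasses per bottle)

human_dose_mg_per_kg = rat_dose_mg_per_kg / conversion_factor    # about 20 mg/kg
daily_dose_mg = human_dose_mg_per_kg * human_weight_kg           # about 1250 mg/day
glasses_per_day = daily_dose_mg / resveratrol_per_glass_mg

print(f"{human_dose_mg_per_kg:.0f} mg/kg -> {daily_dose_mg:.0f} mg/day "
      f"-> roughly {glasses_per_day:.0f} glasses of wine per day")
```

That works out to several thousand glasses, or around 800 bottles, a day; bigger glasses really aren’t going to cover it.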

October 12, 2015

Elephants and cancer: getting it backwards

One News had a story tonight about elephants. This is how it starts:

NZ anchor: An American researcher thinks he may have come up with a new weapon in the fight against cancer, inspired by a trip to the zoo. He remembered that elephants almost never get cancer and wondered whether what protects them could also help us.

US reporter: Elephants have survived 55 million years on this earth. They’ve evolved to beat cancer, and they might just help us beat it too

That’s a nice story, but it’s basically backwards from the more-plausible story in Nature News, and the (open-access) paper in JAMA.

The distinctive feature of elephant blood, according to either version of the story, is that elephants have many more copies of the tumour-suppressor gene p53. This gene makes a key protein in the mechanism that causes cells with DNA damage to kill themselves rather than reproducing and turning into tumours.  A large proportion of tumours have mutations in p53, and people who inherit a damaged copy of the gene tend to develop cancer (including some unusual forms) early in life.  We’ve known about p53 for a long time — decades — so while it is a target for drug development, it isn’t by any means a new target.  We haven’t got far with it because it’s hard to mimic the effect of a protein that acts inside the cell nucleus.

The story in Nature News is that the American researcher, Dr Joshua Schiffman, specialises in treating children with familial cancer, including ones who have inherited mutations in p53 (Li-Fraumeni syndrome). He heard a talk about elephants having many copies of p53. He then went to his local zoo to find out what the cancer rate was in elephants, and confirmed it was low. This is important; lots of people will tell you that sharks, for example, don’t get cancer, and that’s just not true. Elephants, on the other hand, really do seem to have a surprisingly low rate of cancer.

Since elephants have a lot of cells and live a long time, you’d expect them to have a lot of chances to get cancer. Studying elephants makes sense as a way to find completely new ways of treating or preventing cancer. Unfortunately, it seems that a major reason elephants don’t get cancer  is that they have lots of redundant p53 genes, which isn’t a new treatment target. (Other reasons may be that they don’t smoke and they eat vegetarian diets.)

So, while it’s true that elephants have multiple copies of the p53 gene, everything else in the story is basically backwards. Looking for new cancer treatment targets in elephants is a good idea, but that isn’t quite what they did. The findings are good news for elephants but they are bad news for us; p53 isn’t a promising new treatment target, it’s one of the oldest ones we have.

August 30, 2015

Genetically targeted cancer treatment

Targeting cancer treatments to specific genetic variants has certainly had successes with common mutations — the best-known example must be Herceptin for an important subset of breast cancer. Reasonably affordable genetic sequencing has the potential for finding specific, uncommon mutations in cancers where there isn’t a standard, approved drug.

Most good ideas in medicine don’t work, of course, so it’s important to see if this genetic sequencing really helps, and how much it costs.  Ideally this would be in a randomised trial where patients are randomised to the best standard treatment or to genetically-targeted treatment. What we have so far is a comparison of disease progress for genetically-targeted treatment compared to a matched set of patients from the same clinic in previous years.  Here’s a press release, and two abstracts from a scientific conference.

In 72 out of 243 patients whose disease had progressed despite standard treatment, the researchers found a mutation that suggested the patient would benefit from some drug they wouldn’t normally have got. The median time until these patients started getting worse again was 23 weeks; in the historical patients it was 12 weeks.

The Boston Globe has an interesting story talking to researchers and a patient (though it gets some of the details wrong).  The patient they interview had melanoma and got a drug approved for melanoma patients but only those with one specific mutation (since that’s where the drug was tested). Presumably, though the story doesn’t say, he had a different mutation in the same gene — that’s where the largest benefit of sequencing is likely to be.

An increase from 12 to 23 weeks isn’t terribly impressive, and it came at a cost of US$32,000 — the abstract and press release say there wasn’t a cost increase, but that’s because they looked at cost per week, not total cost.  It’s not nothing, though; it’s probably large enough that a clinical trial makes sense and small enough that a trial is still ethical and feasible.
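To see why “no increase in cost per week” and an extra US$32,000 are compatible, here is a toy calculation. It assumes the weekly cost really was identical in the two groups and that the whole US$32,000 comes from the extra eleven weeks of treatment; only the 12- and 23-week medians and the US$32,000 figure come from the reports above.

```python
# Toy illustration of the cost-per-week versus total-cost distinction.
weeks_historical = 12      # median weeks to progression, historical patients
weeks_targeted = 23        # median weeks to progression, genetically-targeted group
extra_total_cost = 32_000  # reported extra cost, US$

# If cost per week is the same in both groups, the extra cost is entirely
# due to the extra weeks of treatment.
implied_weekly_cost = extra_total_cost / (weeks_targeted - weeks_historical)

print(f"implied cost per week: ${implied_weekly_cost:,.0f}")
print(f"total treatment cost: ${implied_weekly_cost * weeks_historical:,.0f} "
      f"vs ${implied_weekly_cost * weeks_targeted:,.0f}")
```

Same cost per week, but nearly twice the total cost, simply because treatment goes on for nearly twice as long.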

The Boston Globe story is one of the first products of their new health-and-medicine initiative, called “Stat”. That’s not short for “statistics”; it’s the medical slang meaning “right now”, from the Latin statim.

August 20, 2015

The second-best way to prevent hangovers?

From Stuff: “Korean pears are the best way to prevent hangovers, say scientists.”

This is precisely not what scientists say; in fact, the scientist in question is even quoted (in the last line of the story) as not saying that.

Meanwhile, as a responsible scientist, she reminded that abstaining from excess alcohol consumption is the only certain way to avoid a hangover.

At least Stuff got ‘prevention’ in the headline. Many other sources, such as the Daily Mail, led with claims of a “hangover cure.”  The Mail also illustrated the story with a photo of the wrong species: the research was on the Asian species Pyrus pyrifolia,  rather than the European pear Pyrus communis. CSIRO hopes that European pears are effective, since that’s what Australia has vast quantities of, but they weren’t tested.

What Stuff doesn’t seem to have noticed is that this isn’t a new CSIRO discovery. The blog post certainly doesn’t go out of its way to make that obvious, but right at the bottom, after the cat picture, the puns, and the Q&A with the researcher, you can read

Manny also warns this is only a preliminary scoping study, with the results yet to be finalised. Ultimately, her team hope to deliver a comprehensive review of the scientific literature on pears, pear components and relevant health measures.

That is, the experimental study on Korean pears isn’t new research done at CSIRO. It’s research done in Korea, and published a couple of years ago. There’s nothing wrong with this, though it would have been nice to give credit, and it would have made the choice of Korean pears less mysterious.

The Korean researchers recruited a group of young Korean men, and gave them alcohol (in the form of soju), preceded by either Korean pear juice or placebo pear juice (pear-flavoured sweetened water). Blood chemistry studies, as well as research in mice by the same group, suggest that the pear juice speeds up the metabolism of alcohol and acetaldehyde. This didn’t prevent hangovers, but it did seem to lead to a small reduction in hangover severity.

The study was really too small to be very convincing. Perhaps more importantly, the alcohol dose was 540ml of 20% alcohol (over 100ml of pure alcohol, about eight and a half standard drinks) over a short period of time, so you’d hope it was relevant to a fairly small group of people. Even in Australia.
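For scale, here is that conversion, assuming the NZ/Australian definition of a standard drink (10 grams of ethanol) and the usual density of ethanol, roughly 0.79 g/ml; neither number comes from the study itself.

```python
# Converting the study's alcohol dose into standard drinks.
volume_ml = 540                   # volume of soju given in the study
abv = 0.20                        # 20% alcohol by volume
ethanol_density_g_per_ml = 0.789  # density of ethanol (assumed, standard value)
standard_drink_g = 10             # NZ/Australian standard drink (assumed definition)

ethanol_g = volume_ml * abv * ethanol_density_g_per_ml
standard_drinks = ethanol_g / standard_drink_g

print(f"about {ethanol_g:.0f} g of ethanol, or {standard_drinks:.1f} standard drinks")
```

A UK “unit” is smaller (8 grams), so in those terms it is more like ten or eleven; either way it is a heavy session.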

 

August 6, 2015

Feel the burn

Q: What did you have for lunch?

A: Sichuan-style dry-fried green beans

Q: Because of the health benefits of spicy food?

A: Uh.. no?

Q: “Those who eat spicy foods every day have a 14 per cent lower risk of death than those who eat it less than once a week.” Didn’t you see the story?

A: I think I skipped over it.

Q: So, if my food is spicy I have a one in seven chance of immortality?

A: No

Q: But 14% lower something? Premature death, like the Herald story says?

A: The open-access research paper says a 14% lower rate of death.

Q: Is that just as good?

A: According to David Spiegelhalter’s approximate conversion formula, that would mean about 1.5 years extra life on average, if it kept being true for your whole life. (There’s a rough sanity check of that conversion at the end of this post.)

Q: Ok. That’s still pretty good, isn’t it?

A: If it’s real.

Q: They had half a million people. It must be pretty reliable, surely?

A: The problem isn’t uncertainty so much as bias: people who eat spicy food might be slightly different in other ways. Having more people doesn’t help much with bias. Maybe there are differences in weight, or physical activity.

Q: Are there? Didn’t they look?

A: Um. Hold on. <reads> Yes, they looked, and no, there aren’t. But there could be differences in lots of other things. They didn’t analyse diet in that much detail, and it wouldn’t be hard to get a bias of 14%.

Q: Is there a reason spicy food might really reduce the rate of death?

A: The Herald story says that capsaicin fights obesity, and the Stuff story says bland food makes you overeat

Q: Didn’t you just say that there weren’t weight differences?

A: Yes.

Q: But it could work some other way?

A: It could. Who can tell?

Q: Ok, apart from your correlation and causation hangups, is there any reason I shouldn’t at least use this to feel good about chilis?

A: Well, there’s the fact that the correlation went away in people who regularly drank any alcohol.

Q: Oh. Really?

A: Really. Figure 2 in the paper.

Q: But that’s just correlation, not causation, isn’t it?

A: Now you’re getting the idea.
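A footnote on the conversion mentioned above: Spiegelhalter’s formula isn’t derived here, but you can get a number in the same ballpark from a crude survival model. This is only a sanity check with made-up (though roughly realistic) mortality parameters, not his actual calculation.

```python
import numpy as np

# Rough sanity check of "14% lower death rate ~ about 1.5 extra years of life".
# Model adult mortality with a Gompertz hazard. The parameters are assumptions:
# an annual death rate of about 0.1% at age 35, doubling roughly every 7 years.
a = 0.001
b = np.log(2) / 7

def remaining_years(age, hazard_scale=1.0, horizon=120, step=0.01):
    """Expected years of life remaining, by integrating the survival curve."""
    t = np.arange(0.0, horizon - age, step)
    hazard = hazard_scale * a * np.exp(b * (age + t - 35))
    survival = np.exp(-np.cumsum(hazard) * step)   # S(t) = exp(-cumulative hazard)
    return survival.sum() * step

baseline = remaining_years(35)
lower = remaining_years(35, hazard_scale=0.86)     # a 14% lower rate of death
print(f"gain in life expectancy: about {lower - baseline:.1f} years")
```

With these made-up parameters the gain comes out at around a year and a half, the same order as the rule of thumb; the point of the post stands either way.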

 

 

August 5, 2015

What does 90% accuracy mean?

There was a lot of coverage yesterday about a potential new test for pancreatic cancer. 3News covered it, as did One News (but I don’t have a link). There’s a detailed report in the Guardian, which starts out:

A simple urine test that could help detect early-stage pancreatic cancer, potentially saving hundreds of lives, has been developed by scientists.

Researchers say they have identified three proteins which give an early warning of the disease, with more than 90% accuracy.

This is progress; pancreatic cancer is one of the diseases where there genuinely is a good prospect that early detection could improve treatment. The 90% accuracy, though, doesn’t mean what you probably think it means.

Here’s a graph showing how the error rate of the test changes with the numerical threshold used for diagnosis (figure 4, panel B, from the research paper):

[ROC-style curve from figure 4, panel B of the research paper]

As you move from left to right the threshold decreases; the test is more sensitive (picks up more of the true cases), but less specific (diagnoses more people who really don’t have cancer). The area under this curve is a simple summary of test accuracy, and that’s where the 90% number came from.  At what the researchers decided was the optimal threshold, the test correctly reported 82% of early-stage pancreatic cancers, but falsely reported a positive result in 11% of healthy subjects.  These figures are from the set of people whose data was used in putting the test together; in a new set of people (“validation dataset”) the error rate was very slightly worse.

The research was done with an approximately equal number of healthy people and people with early-stage pancreatic cancer. They did it that way because that gives the most information about the test for a given number of people. It’s reasonable to hope that the area under the curve and the sensitivity and specificity of the test will be the same in the general population. Even so, the accuracy (in the non-technical meaning of the word) won’t be.

When you give this test to people in the general population, nearly all of them will not have pancreatic cancer. I don’t have NZ data, but in the UK the current annual rate of new cases goes from 4 out of 100,000 people at age 40 to 100 out of 100,000 at ages 85 and over. The average over all ages is 13 cases per 100,000 people per year.

If 100,000 people are given the test and 13 have early-stage pancreatic cancer, about 10 or 11 of the 13 cases will have positive tests, but so will 11,000 healthy people.  Of those who test positive, 99.9% will not have pancreatic cancer.  This might still be useful, but it’s not what most people would think of as 90% accuracy.
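Here is that base-rate arithmetic spelled out, using the figures above (82% sensitivity, an 11% false-positive rate, and 13 cases per 100,000 per year):

```python
# Positive predictive value of the test in a general population,
# using the sensitivity, false-positive rate and incidence quoted above.
population = 100_000
cases = 13                  # early-stage pancreatic cancers per year
sensitivity = 0.82          # proportion of true cases the test picks up
false_positive_rate = 0.11  # proportion of healthy people incorrectly flagged

true_positives = cases * sensitivity
false_positives = (population - cases) * false_positive_rate

ppv = true_positives / (true_positives + false_positives)
print(f"true positives: {true_positives:.0f}, false positives: {false_positives:.0f}")
print(f"share of positives who do NOT have pancreatic cancer: {1 - ppv:.1%}")
```

That is where the 99.9% figure comes from: a test can be “90% accurate” in the area-under-the-curve sense and still be wrong for nearly everyone it flags, once the disease is rare.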

 

August 1, 2015

Ebola vaccine trial

You’ve probably heard that there are positive results from an Ebola vaccine trial (3News, Radio NZ, Stuff, Herald). The stories are all actually good. Here’s the (open-access) research paper

The vaccine was genetically engineered: it’s a live virus for an animal disease that doesn’t spread in humans, modified to produce just one Ebola protein. Having a live virus makes the immune system respond more enthusiastically, but you wouldn’t want to risk a vaccine containing anything even remotely like live Ebola virus. Genetic engineering produces a live virus that contains none of the functional bits of Ebola, so that even if it (improbably) turned out to be able to spread, it wouldn’t be a big deal.

The basic trial design was to find Ebola cases and vaccinate their contacts and the contacts of their contacts, with randomisation between immediate vaccination and vaccination 21 days later.  The design was a good compromise: the public-health authorities need to know if the vaccine works in order to decide whether it can be used to control future epidemics, but since it probably does little harm, most people would want to be vaccinated.  With this design, everyone the doctors talk to will get the vaccine, either immediately or in three weeks. The design is also cost-effective, since everyone you need to vaccinate is someone the public health system would want to check up on anyway.

In practice, not everyone eligible will end up being vaccinated: some will refuse, and some will not be contactable. You have to decide how to include the unvaccinated people in the analysis. In this trial, no-one in the immediate-vaccination group who actually got vaccinated ended up with Ebola, which is what gives the headline 100% success rate. If you compare people who were randomised to immediate vaccination, whether they got it or not, with those who were randomised to delayed vaccination (a more common analysis strategy), the vaccine was still estimated as 75% effective.
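The difference between the two analyses is easier to see with numbers. The counts below are invented purely to illustrate the point and to roughly reproduce the 100% and 75% figures; the real numbers are in the research paper.

```python
# "As vaccinated" versus "as randomised" (intention-to-treat) vaccine efficacy.
# All counts here are hypothetical, chosen only to match the quoted 100% and 75%.
immediate_randomised = 4000   # people randomised to immediate vaccination
immediate_vaccinated = 3000   # ...of whom this many were actually vaccinated
cases_among_vaccinated = 0    # Ebola cases among those actually vaccinated
cases_immediate_arm = 4       # Ebola cases among everyone randomised to immediate
delayed_randomised = 4000     # people randomised to delayed (21-day) vaccination
cases_delayed_arm = 16

def efficacy(cases, n, control_cases, control_n):
    """Vaccine efficacy: one minus the ratio of attack rates."""
    return 1 - (cases / n) / (control_cases / control_n)

ve_as_vaccinated = efficacy(cases_among_vaccinated, immediate_vaccinated,
                            cases_delayed_arm, delayed_randomised)
ve_as_randomised = efficacy(cases_immediate_arm, immediate_randomised,
                            cases_delayed_arm, delayed_randomised)
print(f"as vaccinated: {ve_as_vaccinated:.0%}, as randomised: {ve_as_randomised:.0%}")
```

The first analysis gives the headline 100%; the second, which keeps the groups as randomisation made them, gives 75%. Both are legitimate questions; they just aren’t the same question.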

There’s still some work to do: when the vaccine is used, it will be important to keep track as far as possible of how well it works. That’s important because we need to know if it’s worth working on a new vaccine, or whether to divert resources to research on treatments for those who are infected, or to other diseases.  For a change, though, this is good news.

 

July 25, 2015

Some evidence-based medicine stories

  • Ben Goldacre has a piece at Buzzfeed, which is nonetheless pretty calm and reasonable, talking about the need for data transparency in clinical trials
  • The Alltrials campaign, which is trying to get regulatory reform to ensure all clinical trials are published, was joined this week by a group of pharmaceutical company investors.  This is only surprising until you think carefully: it’s like reinsurance companies and their interest in global warming — they’d rather the problems went away, but there’s no profit in just ignoring them.
  • The big potential success story of scanning the genome blindly is a gene called PCSK9: people with a broken version have low cholesterol. Drugs that disable PCSK9 lower cholesterol a lot, but have not (yet) been shown to prevent or postpone heart disease. They’re also roughly 100 times more expensive than the current drugs, and have to be injected. None the less, they will probably go on sale soon.
    A survey of a convenience sample of US cardiologists found that they were hoping to use the drugs in 40% of their patients who have already had a heart attack, and 25% of those who have not yet had one.

July 22, 2015

Are reusable shopping bags deadly?

There’s a research report by two economists arguing that San Francisco’s ban on plastic shopping bags has led to a nearly 50% increase in deaths from foodborne disease, an increase of about 5.5 deaths per year.  I was asked my opinion on Twitter. I don’t believe it.

What the analysis does show is some evidence that emergency room visits for foodborne disease have increased: the researchers analysed admissions for E. coli, Salmonella, and Campylobacter infection, and found an increase in San Francisco but not in neighbouring counties. There’s a statistical issue in that the number of counties is small and the standard error estimates tend to be a bit unreliable in that setting, but that’s not prohibitive. There’s also a statistical issue in that we don’t know which (if any) infections were related to contamination of raw food, but again that’s not prohibitive.

The problem with the analysis of deaths is the definition: the deaths in the analysis were actually all deaths coded to ICD-10 A00-A09. Most of this isn’t foodborne bacterial disease, and a lot of the deaths from foodborne bacterial disease will be in settings where shopping bags are irrelevant. In particular, two important contributors are

  • Clostridium difficile infections after antibiotic use, which have a fairly high mortality rate
  • Diarrhoea in very frail elderly people, in residential aged care or nursing homes.

In the first case, this has nothing to do with food. In the second case, it’s often person-to-person transmission (with norovirus a leading cause), but even if it is from food, the food isn’t carried in reusable shopping bags.

Tomás Aragón, of the San Francisco Department of Public Health, has a more detailed breakdown of the death data than was available to the researchers. I think his memo is too negative on the statistical issues, but the data underlying the A00-A09 categories are pretty convincing:

[Table from Aragón’s memo: San Francisco deaths broken down by ICD-10 code within A00-A09]

Category A021 is Salmonella (other than typhoid); A048 and A049 are other miscellaneous bacterial infections; A081 and A084 are viral. A090 and A099 are left-over categories that are supposed to exclude foodborne disease but will capture some cases where the mechanism of infection wasn’t known.  A047 is Clostridium difficile.   The apparent signal is in the wrong place. It’s not obvious why the statistical analysis thinks it has found evidence of an effect of the plastic-bag ban, but it is obvious that it hasn’t.

Here, for comparison, are New Zealand mortality data for specific foodborne infections, from foodsafety.govt.nz, for the most recent years available:

[Table: New Zealand deaths from specific foodborne infections, from foodsafety.govt.nz]

Over the three years, there were only ten deaths where the underlying cause was one of these food-borne illnesses — a lot of people get sick, but very few die.

 

The mortality data don’t invalidate the analysis of hospital admissions, where there’s a lot more information and it is actually about (potentially) foodborne diseases.  More data from other cities — especially ones that are less atypical than San Francisco — would be helpful here, and it’s possible that this is a real effect of reusing bags. The economic analysis, however, relies heavily on the social costs of deaths.

July 16, 2015

Don’t just sit there, do something

The Herald’s story on sitting and cancer is actually not as good as the Daily Mail story it’s edited from. Neither one names the journal or the researchers (the paper is here). Both mention a previous study, but the Mail goes into more useful detail.

The basic finding is

Longer leisure-time spent sitting, after adjustment for physical activity, BMI and other factors, was associated with risk of total cancer in women (RR=1.10, 95% CI 1.04-1.17 for >6 hours vs. <3 hours per day), but not men (RR=1.00, 95% CI 0.96-1.05)

The lack of association in men was a surprise, and strongly suggests that the result for women shouldn’t be believed. It’s also notable that while the estimated associations with a few types of cancer look strong, the lower limits on the confidence intervals don’t look strong:

risk of multiple myeloma (RR=1.65, 95% CI 1.07-2.54), invasive breast cancer (RR=1.10, 95% CI 1.00-1.21), and ovarian cancer (RR=1.43, 95% CI 1.10-1.87).

Since the researchers looked at eighteen subsets of cancer in addition to all types combined, and these are the top three, the real lower limits are even lower.
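One rough way to see how much difference the multiple looks make: back out the standard error from the reported confidence interval and apply a simple Bonferroni correction for nineteen outcomes. The paper didn’t do this adjustment; it’s just to show the direction and rough size of the effect.

```python
import math
from statistics import NormalDist

# Strongest reported association: multiple myeloma, RR 1.65 (95% CI 1.07-2.54).
# Recover the standard error on the log scale from the interval, then widen the
# interval as if correcting for 19 outcomes (18 cancer subsets plus the total).
rr, lo, hi = 1.65, 1.07, 2.54
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)

z_adjusted = NormalDist().inv_cdf(1 - 0.025 / 19)   # about 3.0 instead of 1.96
lo_adj = math.exp(math.log(rr) - z_adjusted * se)
hi_adj = math.exp(math.log(rr) + z_adjusted * se)

print(f"unadjusted 95% CI: {lo:.2f} to {hi:.2f}")
print(f"adjusted for 19 looks: {lo_adj:.2f} to {hi_adj:.2f}")
```

After the correction the lower limit drops to about 0.85, comfortably on the other side of ‘no difference’.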

The stories referred to previous research, published last year, which summarised many previous studies of sitting and cancer risk.  That’s good, but the summary wasn’t entirely accurate. From the Herald:

Previous research by the University of Regensburg in Germany found that spending too much time sitting raised the risk of bowel and lung cancer in both men and women.

In fact, the previous research didn’t look separately at men and women (or, at least, didn’t report doing so). While you would expect similar results in men and women, that study doesn’t address the question.

The Mail does have one apparently good contextual point:

However, this previous study – which reviewed 43 other studies – did not find a link between sitting and a higher risk of breast and ovarian cancer. 

But when you look at the actual figures, there’s no real inconsistency between the two studies: they both report weak evidence of higher risk; it’s just a question of whether the lower end of the confidence interval happens to cross the ‘no difference’ line for a particular subset of cancers.

Overall, this is a pretty small risk difference to detect from observational data. If you didn’t already think that long periods of sitting could be bad for you, this wouldn’t be a reason to start.