Posts filed under Medical news (341)

January 21, 2016

Mining uncertainty

The FDA collects data on adverse events in people taking any prescription drugs. This information is, as it should be, available for other uses. I’ve been involved in research using it.

The data are also available for less helpful purposes. As Scott Alexander found,  if you ask Google whether basically anything could cause basically anything, there are companies that make sure Google will return some pages reporting that precise association.  And, as he explains, this is serious.

For example, I tried “Adderall” and “plantar fasciitis” as an implausible combination and got 4 hits based on FDA data. And “Accutane” and “plantar fasciitis”, and “Advair” and “plantar fasciitis”, and “acyclovir” and “plantar fasciitis”. Then I got bored.

It’s presumably true that there are people who have been taking Adderall and at the same time have had plantar fasciitis. But given enough patients to work with, that will be true for any combination of drug and side effect. And, in fact, the websites will happily put up a page saying there are no reported cases, but still saying “you are not alone” and suggesting you join their support group.
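
As a rough back-of-the-envelope illustration of why that has to be true (my sketch, with made-up numbers rather than anything from the FDA database): if reports mentioning a drug and reports mentioning a complaint occurred completely independently, the expected number mentioning both would just be the product of the two rates, and with millions of reports that product is rarely zero.

```python
# Back-of-the-envelope sketch: coincidental 'hits' expected under independence.
# All numbers are hypothetical, chosen only to illustrate the point.

n_reports = 5_000_000    # assumed total number of adverse-event reports
p_drug = 0.002           # assumed fraction of reports mentioning the drug
p_symptom = 0.0005       # assumed fraction of reports mentioning the complaint

expected_hits = n_reports * p_drug * p_symptom
print(f"Expected co-mentions by chance alone: {expected_hits:.0f}")  # ~5
```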

These websites are bullshit in the sense of philosopher Harry Frankfurt: it is irrelevant to their purpose whether Adderall really causes plantar fasciitis or not. They make their money from the question, not from the answer.

 

(via Keith Ng)

January 19, 2016

Rebooting your immune system?

OneNews had a strange-looking story about multiple sclerosis tonight, with lots of footage of one British guy who’d got much better after treatment, and some mentions of an ongoing trial. With the trial still going on, it wasn’t clear why there was publicity now, or why it mostly involved just one patient.

I Google these things so you don’t have to.

So. It turns out there was a new research paper behind the publicity. There is an international trial of immune stem cell transplant for multiple sclerosis, which plans to follow patients for five years after treatment. The research paper describes what happened for the first three years.

As the OneNews story says, there has been a theory for a long time that if you wipe out someone’s immune system and start over again, the new version wouldn’t attack the nervous system and the disease would be cured. The problem was two-fold. First, wiping out someone’s immune system is an extraordinarily drastic treatment — you give a lethal dose of chemotherapy, and then rescue the patient with a transplanted immune system. Second, it didn’t work reliably.

The researcher behind the current trial believes that the treatment would work reliably if it was done earlier — during one of the characteristic remissions in disease progress, rather than after all else fails. This trial involves 25 patients, and so far the results are reasonably positive, but three years is really too soon to tell whether the benefits are worth the treatment. Even with full follow-up of this uncontrolled study it probably won’t be clear exactly who the treatment is worthwhile for.

Why the one British guy? Well,

The BBC’s Panorama programme was given exclusive access to several patients who have undergone the stem cell transplant.

The news story is clipped from a more in-depth current-affairs programme. That BBC link also shows a slightly worrying paranoid attitude from the lead researcher

He said: “There has been resistance to this in the pharma and academic world. This is not a technology you can patent and we have achieved this without industry backing.”

That might explain pharma, but there’s no real reason for the lack of patents to be a problem for academics. It’s more likely that doctors are reluctant to recommend ultra-high-dose chemotherapy without more concrete evidence. After all, it was supposed to work for breast cancer and didn’t, and it was theorised to work for HIV and doesn’t seem to. And at least in the past it didn’t work reliably for multiple sclerosis.

All in all, I think the OneNews story was too one-sided given the interim nature of the data and lack of availability of the treatment.  It could also have said a bit more about how nasty the treatment is.  I can see it being fine as part of a story in a current affairs programme such as Panorama, but as TV news I think it went too far.

January 18, 2016

Supplement pushing

The Herald has a Daily Mail story about vitamin D for making you generally feel better. It’s not so long ago that the NZ media had a lot of less supportive coverage on vitamin D — Ian Reid, Mark Bolland, and Andrew Grey won the Prime Minister’s Science Prize last year for their work showing that calcium and vitamin D aren’t all they’re cracked up to be.

The story does have some new evidence.

In the study, by a medical team in Edinburgh, volunteers were asked to cycle for 20 minutes. They were then given either a placebo or vitamin D and, two weeks later, were asked to cycle for 20 minutes again.

The buck needs to stop somewhere

From Vox:

Academic press offices are known to overhype their own research. But the University of Maryland recently took this to appalling new heights — trumpeting an incredibly shoddy study on chocolate milk and concussions that happened to benefit a corporate partner.

Press offices get targeted when this sort of thing happens because they are a necessary link in the chain of hype.  On the other hand, unlike journalists and researchers, their job description doesn’t involve being skeptical about research.

For those who haven’t kept up with the story: the research is looking at chocolate milk produced by a sponsor of the study, compared to other sports drinks. The press release is based on preliminary unpublished data. The drink is fat-free, but contains as much sugar as Coca-Cola. And the press release also says

“There is nothing more important than protecting our student-athletes,” said Clayton Wilcox, superintendent of Washington County Public Schools. “Now that we understand the findings of this study, we are determined to provide Fifth Quarter Fresh to all of our athletes.”

which seems to have got ahead of the evidence rather.

This is exactly the sort of story that’s very unlikely to be the press office’s fault. Either the researchers or someone in management at the university must have decided to put out a press release on preliminary data and to push the product to the local school district. Presumably it was the same people who decided to do a press release on preliminary data from an earlier study in May — data that are still unpublished.

In this example the journalists have done fairly well: Google News shows that coverage of the chocolate milk brand is almost entirely negative.  More generally, though, there’s the problem that academics aren’t always responsible for how their research is spun, and as a result they always have an excuse.

A step in the right direction would be to have all research press releases explicitly endorsed by someone. If that person is a responsible member of the research team, you know who to blame. If it’s just a publicist, well, that tells you something too.

January 17, 2016

Not science yet

I’ve written before about the problem of unpublished science in the news: the news story won’t (can’t) give much detail, and there’s no way to find it out.  Stuff has gone one step further:

Recent animal studies show sleep’s cleansing process in action. But now scientists at Oregon Health & Science University are preparing to conduct a study on humans that would further explain deep sleep’s effect on human brains.

The ‘animal studies’ claim comes without any source. Fortunately the Google comes to the rescue and suggests it’s the research in this story from 2013. Interestingly, a couple of subsequent mouse studies have also found brain problems from interrupted sleep — but since each of the three studies found a different problem, with no overlap, this isn’t as supportive as it sounds.

It would matter less if it weren’t for the first sentence of the story

Forget about needing beauty sleep. It’s your brain that may suffer the most from a lack of deep shut eye.

There’s a definite suggestion that this is a risk factor you can do something about, especially as this is in the “Well & Good” section of the Stuff site.

Even if the Oregon research had been carried out and published, it might well not justify that sort of implication. Research that they’re still preparing to do certainly doesn’t.

 

[Update: It’s getting to be a trend: the Herald also has a story about research that hasn’t happened yet, on addiction.]

January 1, 2016

As dangerous as bacon?

From the Herald (from the Telegraph)

Using e-cigarettes is no safer than smoking tobacco with nicotine, scientists warned after finding the vapour damages DNA and could cause cancer.

Smoking tobacco is right up near the top of cancer risks that are easy to acquire, both in terms of how big the risk is and in terms of how strong the evidence is.

[There was some stuff here that was right as to the story in the Herald but wrong about the actual research paper, so I got rid of it. Some of the tests in the research paper used real cigarette smoke, and it was worse but not dramatically worse than the e-cig smoke]

 

The press release is a bit more responsibly written than the story. It describes some of the limitations of the lab tests, and makes it clear that the “no safer than smoking” is an opinion, not a finding. It also gets the journal name right (Oral Oncology) and links to the research paper.

It’s worth quoting the conclusion section from the paper. Here the researchers are writing for other people who understand the issues and whose opinion matters. I’ve deleted one sentence that’s technical stuff basically saying “we saw DNA damage and cell death”

In conclusion, our study strongly suggests that electronic cigarettes are not as safe as their marketing makes them appear to the public. [technical stuff]. Further research is needed to definitively determine the long-term effects of e-cig usage, as well as whether the DNA damage shown in our study as a result of e-cig exposure will lead to mutations that ultimately result in cancer.

That’s very different from the story.

November 20, 2015

Headline inflation

The breakthrough of the decade doesn’t happen most years, and the breakthrough of the year doesn’t happen most weeks, but you still need to put out a health news section.  If you do it by hyping whatever turns up, your headlines end up not having a lot of information value.

So, today, “Blood test for ovarian cancer ‘100% accurate’” in the Herald is grade inflation.  The researchers at Georgia Tech have some impressive findings, but their test still hasn’t been evaluated on anyone other than the 95 women whose cancer status was known in advance and whose blood was used to develop the test. As the research paper says

…because the disease is in low prevalence in the general population (~0.1% in USA), a screening test must attain a positive predictive value (PPV) of >10%, with a specificity ≥99.6% and a sensitivity ≥75% to be of clinical relevance in the general population

That is, they want the test to give no more than 4 false positives per 1000 healthy women. So far, they’ve only looked at 49 healthy women.
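
If you want to check the arithmetic behind that quoted requirement, here’s the standard Bayes calculation with the paper’s threshold numbers (my sketch, not the researchers’ code):

```python
# Positive predictive value implied by the thresholds quoted from the paper:
# prevalence ~0.1%, sensitivity >= 75%, specificity >= 99.6%.

prevalence = 0.001      # ~0.1% of the general population
sensitivity = 0.75
specificity = 0.996     # i.e. at most 4 false positives per 1000 healthy women

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.1%}")   # about 16%: roughly one in six positives would be real
```

That’s why the specificity requirement is so strict, and why 49 healthy controls are nowhere near enough to establish it.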

The story is better than the headline on how significant this is, quoting an independent expert:

Dr Simon Newman, of Target Ovarian Cancer, said: “It is exciting preliminary research. It’s crucial to diagnose ovarian cancer promptly, as up to 90 per cent of women would live for five or more years if diagnosed at the earliest stage.

“However, this highly promising discovery needs significant further development and validation in large clinical trials before we know if it is suitable for screening the general population and works as well as predicted.

Even that’s exaggerated. We just don’t know what the survival would be with early diagnosis. At the moment, you have to be very fortunate to have your ovarian cancer detected at the earliest stage, and these tumours might be very non-representative.  We’ve seen real but smaller-than-expected benefits from screening in other cancers.

There are worse problems with the story than a bit of exaggeration, though. It gets the scientific idea completely wrong, saying:

But when Georgia Institute of Technology researchers looked at the blood of 46 women in the early stages of the disease and that of 49 healthy women, the cancerous samples contained different levels of 16 proteins compared with the healthy ones.

The innovative step in this research was to not use proteins. As the press release says

“People have been looking at proteins for diagnosis of ovarian cancer for a couple of decades, and the results have not been very impressive,”

Instead, the researchers looked at ‘metabolites’, smaller molecules produced by cell processes. Their hypothesis was that tumours might have varying genetic changes and varying proteins, but if they ended up as cancer they would have some cellular processes in common.

 

November 10, 2015

New blood pressure trial

A big randomised trial comparing strategies for treating high blood pressure has just ended early (paper, paywalled).  There’s good coverage in the New York Times, and there will probably be a lot more over the next week. It’s a relatively complicated story.

The main points:

  • Traditionally, doctors try to get your blood pressure below 140mmHg, but some people always thought lower would be better.
  • The study, funded by the US government, randomly allocated over 9000 people with high blood pressure and some other heart disease risk factor (but not diabetes) to a blood pressure target of either 140mmHg or 120mmHg.
  • A previous trial with the same targets, but in people with diabetes, had been unimpressive: the results slightly favoured more-intensive treatment, but the difference was small, and well within the variation you’d expect by chance.
  • In the new trial blood pressure targeting worked really well: the average blood pressure in the low group was 122mmHg, and in the normal group 135mmHg.
  • People in the low group typically took two or three blood pressure medications, and those in the normal group one or two — but in both cases with quite a lot of variation.
  • The low BP group had 76 fewer ‘primary outcome events’ (heart attack, stroke, heart failure, or death from heart disease), and 55 fewer deaths from any cause.
  • From the beginning, the plan was to stop whenever the difference in number of ‘primary outcome events’ exceeded a specified threshold, unless there was a good reason based on the data to continue. The difference had been just barely over the threshold at the previous analysis, and they continued. In mid-September it was clearly over the threshold, and they stopped.
  • Stopping early will tend to overestimate the benefit (the sketch after this list shows why), but the fact that they waited for one more analysis reduces this bias.
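
On that last point, here is a small simulation sketch (my illustration, with arbitrary parameters, not anything from the paper or the trial’s actual design) of why stopping at the first interim analysis to cross a threshold inflates the estimated benefit: the interim looks that happen to cross are disproportionately the ones where chance ran in the treatment’s favour.

```python
import numpy as np

# Simulate many trials with interim analyses; stop when the z-statistic for the
# treatment effect first exceeds a threshold. All parameters are arbitrary
# illustrative choices.
rng = np.random.default_rng(1)
true_effect = 0.1        # assumed true standardized benefit
n_per_look = 100         # patients added between interim analyses
n_looks = 5
z_threshold = 2.0

estimates, stopped_early = [], []
for _ in range(20_000):
    outcomes = rng.normal(true_effect, 1.0, size=n_looks * n_per_look)
    for look in range(1, n_looks + 1):
        data = outcomes[: look * n_per_look]
        z = data.mean() / (data.std(ddof=1) / np.sqrt(len(data)))
        if z > z_threshold or look == n_looks:
            estimates.append(data.mean())
            stopped_early.append(look < n_looks)
            break

estimates = np.array(estimates)
stopped_early = np.array(stopped_early)
print(f"True effect:                      {true_effect}")
print(f"Mean estimate when stopped early: {estimates[stopped_early].mean():.3f}")
print(f"Mean estimate over all trials:    {estimates.mean():.3f}")
```

The estimates from trials that stop early come out noticeably larger than the true effect; waiting for one more analysis, as this trial did, pulls the estimate back down somewhat.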

I’m surprised the benefit from extreme blood pressure reduction is so large (in a relative sense), but even more surprised that they managed to get so many healthy people to take their treatments that consistently for over three years.  As context for this, data from a US national survey in 2011-12 showed that only about two-thirds of those currently taking medications for high blood pressure even get down to 140mmHg.

In an absolute sense the risk reduction is relatively small: for every thousand people on intensive blood pressure reduction — healthy people taking multiple pills, multiple times per day — they saw 12 fewer deaths and 16 fewer ‘events’.   On the other hand, the treatments are cheap and most people can find a combination without much in the way of side effects. If intensive treatment becomes standard, there will probably be more use of combination pills to make multiple drugs easier to take.
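
Those per-thousand figures line up with the raw counts in the bullet points above, assuming roughly 4,700 people per arm (my assumption, taken from the post’s ‘over 9000 people’ in total; the paper’s exact figure may differ slightly):

```python
# Rough arithmetic behind the 'per thousand' figures above.
per_arm = 4700          # assumption: over 9000 people randomised, split evenly
fewer_events = 76       # fewer 'primary outcome events' in the low BP group
fewer_deaths = 55       # fewer deaths from any cause

print(f"Events avoided per 1000 treated: {1000 * fewer_events / per_arm:.0f}")  # ~16
print(f"Deaths avoided per 1000 treated: {1000 * fewer_deaths / per_arm:.0f}")  # ~12
```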

There’s one moderately worrying factor: a higher rate of kidney impairment in the low BP group (higher by a couple of percentage points). The researchers indicate that they don’t know if this is real, permanent  damage, and that more follow-up and testing of those people is needed. If it is a real problem it could be more serious in ordinary medical practice than in the obsessively-monitored trial.  This may well explain why the trial didn’t stop even earlier:  the monitoring committee would have wanted to be sure the benefits were real given the possibility of adverse effects — the sort of difficult decision that is why you have experienced, independent monitoring committees. 

November 9, 2015

Fish and chips might be bad for you

From the Herald (from the Telegraph)

Martin Grootveld, a professor of bioanalytical chemistry and chemical pathology, said his research showed “a typical meal of fish and chips”, fried in vegetable oil, contained as much as 100 to 200 times more toxic aldehydes than the safe daily limit set by the World Health Organisation.

In contrast, heating up butter, olive oil and lard in tests produced much lower levels of aldehydes. Coconut oil produced the lowest levels of the harmful chemicals.

 

That’s in the lab. In July, Professor Grootveld reported the same type of analysis for a BBC programme, but on oil as actually used by home cooks. From the press release at De Montfort University

Professor Grootveld’s team found sunflower oil and corn oil produced aldehydes at levels 20 times higher than recommended by the World Health Organisation. 

Olive oil and rapeseed oil produced far fewer aldehydes as did butter and goose fat.

So, about an order of magnitude less bad than the current story.

The story talks about turning current food advice on its head. The most… the two most… among the several most important things wrong with that claim are: first, that oils high in monounsaturated fats (such as olive oil and rapeseed/canola) are the current food advice; second, that the advice to eat less saturated fat is based on studies of actual disease, not just on lab biochemistry;  third, Prof Grootveld published research on this lipid oxidation phenomenon in 1998, so his reported surprise at the findings is a bit strange; and fourth, “a typical meal of fish and chips” hasn’t been regarded as health food since basically forever.

 

 

November 5, 2015

New source for medical/science news

STAT (statnews.com) is not, sadly, a statistics news site. On the upside, it’s a very promising site covering medicine and medical science.  It’s owned by the same person as the Boston Globe, but is a separate venture.

Their front page has a lot of news items already; for a look at the sort of more detailed story they can handle, there’s one on whether antioxidants have positive or negative effects on cancer. That’s by Sharon Begley, a highly-regarded and award-winning science writer.

STAT has recruited, either as staff or columnists, a lot of impressive people. It’s definitely worth looking at.