Posts filed under Medical news (341)

October 2, 2018

Pharmac rebates

There’s an ‘interactive’ at Stuff about the drug rebates that Pharmac negotiates. The most obvious issue with it is the graphics, for example

[graph: capsules sized by total rebates each year]

and

[graph: rebates as a percentage of drug spend, by year]

The first of these is a really dramatic illustration of a well-known way graphs can mislead: using just one dimension of a two-dimensional or three-dimensional thing to represent a number. The 2016/7 capsule looks much more than twice as big as the puny little 2014/15 one, because it’s twice as high and twice as wide (and by implication from shading, twice as deep).  The first graph also commits the accounting sin of displaying a trend from total, nominal expenditures rather than real (ie, inflation-adjusted) per-capita expenditures.
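The distortion is easy to quantify: scaling every drawn dimension by the value ratio inflates the apparent size by that ratio raised to the number of dimensions. A minimal sketch in Python, using the twofold change described above (illustrative numbers, not measurements from the Stuff graphic):

```python
# Apparent size of a graphic when each drawn dimension is scaled by the
# ratio of the underlying values (illustrative, not measured from Stuff).

def apparent_ratio(value_ratio: float, dimensions: int) -> float:
    """How many times bigger the graphic looks when `dimensions`
    dimensions are each scaled by `value_ratio`."""
    return value_ratio ** dimensions

double = 2.0
print(apparent_ratio(double, 1))  # honest one-dimensional bar: 2.0
print(apparent_ratio(double, 2))  # height and width both doubled: 4.0
print(apparent_ratio(double, 3))  # implied depth as well: 8.0
```

A capsule drawn twice as high, twice as wide, and implicitly twice as deep looks eight times bigger, not twice as big.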

The second one is not as bad, but the descending line to the left of the data points is a bit dodgy, as is the fact that the x-axis is different from the first graph even though the information should all be available.  Also, given that rebates are precisely not a component of Pharmac’s drug spend, the percentage is a bit ambiguous.  The graph shows total rebates divided by what would have been Pharmac’s “drug spend” in the improbable scenario that the same drugs had been bought without rebates. That is, in the most recent year, Pharmac spent $849 million on drugs. If rebates were $400m as shown in the first graph, the percentage in the second graph is something like ($400 million)/($400 million+$849 million)=32%.
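That arithmetic is worth making explicit, since the two candidate denominators give quite different percentages. A quick sketch using the story’s rounded figures:

```python
# Two ways to express rebates as a percentage, using the story's
# rounded figures ($ million).
rebates = 400       # approximate total rebates, from the first graph
drug_spend = 849    # Pharmac's actual (post-rebate) spending on drugs

# Share of what spending would have been without any rebates:
share_of_list_price = rebates / (rebates + drug_spend)
# Share of actual spending, a different but equally natural reading:
share_of_actual = rebates / drug_spend

print(round(100 * share_of_list_price))  # 32
print(round(100 * share_of_actual))      # 47
```

The graph could defensibly be showing either 32% or 47%, which is why the percentage is ambiguous.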

More striking when you listen to the whole thing, though,  is how negative it is about New Zealand getting these non-public discounts on expensive drugs.  In particular, the primary issue raised is whether we’re getting better or worse discounts than other countries (which, indeed, we don’t know), rather than whether we’re getting good value for what we pay — which we basically do know, because that’s exactly what Pharmac assesses.  

Now, since the drug companies do want to keep their prices secret there must be some financial advantage to them in doing so, thus there is probably some financial disadvantage to someone other than them.   It’s possible that we’re in that group; that other comparable countries are getting better prices than we are. It’s also possible that we’re getting better prices than them.  Given Pharmac’s relatively small budget and their demonstrated and unusual willingness not to subsidise overpriced new drugs, I know which way I’d guess.

There are two refreshing aspects to the interactive, though.  First, it’s good to see explicit consideration of the fact that drug prices are primarily not a rich-country problem.   Second, it’s good to see something in the NZ mass media in favour of the principle that Pharmac can and should walk away from bad offers. That’s a definite change from most coverage of new miracle drugs and Pharmac.

February 24, 2018

Scare stories: a pain in the neck

From the Herald, from the Daily Mail, on the dangers of painkillers

Researchers have today revealed the exact risk of having a heart attack or stroke from taking several common painkillers.

They discovered, on average, one in 330 adults who have been taking ibuprofen will experience a heart attack or stroke within four weeks.

However, the drug, costing as little as 20c a tablet and available in supermarkets and dairies, was found to be three times less dangerous than celecoxib, which will lead to one in 105 adults experiencing a heart attack or stroke.

Now, that’s obviously not true for people just taking ibuprofen for an injury or a headache. So what’s the true story?

The research paper is here. As the story says, it followed up 56,000 people in Taiwan with high blood pressure. They were interested in a group of painkillers called “COX-selective” that have a lower risk of causing ulcers and stomach bleeding, but potentially a higher risk of heart attack and stroke. One familiar COX-selective painkiller in NZ is Voltaren; familiar non-selective ones are ibuprofen and naproxen — but the study wasn’t looking at over-the-counter medications bought in supermarkets and dairies, just at people starting prescriptions.

Of the 7927 people starting prescriptions for ibuprofen, 24 ended up getting a heart attack or stroke, after an average of two weeks’ treatment. Of the 1779 starting celecoxib prescriptions, 17 ended up getting a heart attack or stroke, after an average of about three weeks’ treatment. Overall, there was a bit more than one heart attack per ten people per year for those prescribed COX-selective drugs and a bit less than one heart attack per ten people per year for those prescribed non-selective drugs. And there’s no comparison with people who weren’t taking painkillers.

You might wonder how numbers like 24 and 17 are large enough to say anything reliable. They aren’t. The “exact risk” of 1 in 330 from the lead is actually a range from something like 1 in 200 to 1 in 500, even before you consider the uncertainties in generalising from middle-aged and elderly Taiwanese people with hypertension to other groups.
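The width of that range is easy to check from the raw counts. A sketch using a Wilson score interval (the paper may use a different method, so these endpoints are only approximate):

```python
import math

def wilson_ci(events: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = events / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# 24 heart attacks or strokes among 7927 people starting ibuprofen
lo, hi = wilson_ci(24, 7927)
print(f"1 in {1 / hi:.0f} to 1 in {1 / lo:.0f}")  # 1 in 222 to 1 in 491
```

So the “exact risk” of 1 in 330 is compatible with anything from roughly 1 in 220 to 1 in 490, just from sampling uncertainty.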

This study on its own provides only very weak evidence that COX-selective drugs are more dangerous. The conclusion is plausible for all sorts of reasons, but it’s hardly conclusive.  Like it says on the packet, don’t take any of these medications for weeks at a time without consulting a more reliable source than the Daily Mail.

January 10, 2018

Complete balls

The UK’s Metro magazine has a dramatic story under the headline Popping ibuprofen could make your balls shrivel up

Got a pounding headache?

You might just want to give a big glass of water and a nap a go before reaching for the painkillers. Scientists warn that ibuprofen could be wrecking men’s fertility by making their balls shrivel up.

Sounds pleasant.

Fortunately, that’s not what the study showed.

The story goes on

Researchers looked at 31 male participants and found that taking ibuprofen reduced production of testosterone by nearly a quarter in the space of around six weeks.

That’s also almost completely untrue. In fact, the research paper says (emphasis added)

We investigated the levels of total testosterone and its direct downstream metabolic product, 17β-estradiol. Administration of ibuprofen did not result in any significant changes in the levels of these two steroid hormones after 14 d or at the last day of administration at 44 d. The levels of free testosterone were subsequently analyzed by using the SHBG levels. Neither free testosterone nor SHBG levels were affected by ibuprofen.

Stuff has a much better take on this one:

Men who take ibuprofen for longer than the bottle advises could be risking their fertility, according to a new study.

Researchers found that men who took ibuprofen for extended periods had developed a condition normally seen in elderly men and smokers that, over time, can lead to fertility problems

Ars Technica has the more accurately boring headline Small study suggests ibuprofen alters testosterone metabolism.

The study involved 14 men taking the equivalent of six tablets a day of ibuprofen for six weeks (plus a control group). Their testosterone levels didn’t change, but the interesting research finding is that this was due to compensation for what would otherwise have been a decrease. That is, a hormone signalling to increase testosterone production was elevated.  There’s a potential risk that if the men kept taking ibuprofen at this level for long enough, the compensation process might give up. And that would potentially lead to fertility problems — though not (I don’t think) to the problems Metro was worried about.

So, taking ibuprofen for months on end without a good reason? Probably inadvisable. Like it says on the pack.

 

November 23, 2017

More complicated than that

Science Daily

Computerized brain-training is now the first intervention of any kind to reduce the risk of dementia among older adults.

Daily Telegraph

Pensioners can reduce their risk of dementia by nearly a third by playing a computer brain training game similar to a driving hazard perception test, a new study suggests.

Ars Technica

Speed of processing training turned out to be the big winner. After ten years, participants in this group—and only this group—had reduced rates of dementia compared to the controls

The research paper is here, and the abstract does indeed say “Speed training resulted in reduced risk of dementia compared to control, but memory and reasoning training did not”

They’re overselling it a bit. First, these are intervals showing the ratios of the number of cases with and without the three types of treatment, including the uncertainty:

[graph: dementia risk ratios with confidence intervals, for the three treatments]

Summarising this as “speed training works but the other two don’t” is misleading.  There’s pretty marginal evidence that speed training is beneficial and even less evidence that it’s better than the other two.

On top of that, the results are for less than half the originally-enrolled participants, the ‘dementia’ they’re measuring isn’t a standard clinical definition, and this is a study whose 10-year follow-up ended in 2010 and that had a lot of ‘primary outcomes’ it was looking for — which didn’t include the one in this paper.

The study originally expected to see positive results after two years. It didn’t. Again, after five years, the study reported “Cognitive training did not affect rates of incident dementia after 5 years of follow-up.” Ten-year results, reported in 2014, showed relatively modest differences in people’s ability to take care of themselves, as Hilda Bastian commented.

So. This specific type of brain training might actually help. Or one of the other sorts of brain training they tried might help. Or, quite possibly, none of them might help.  On the other hand, these are relatively unlikely to be harmful, and maybe someone will produce an inexpensive app or something.

September 30, 2017

Simple and ineffective

Q: Did you see there’s a new test to predict dementia?

A: Another one?

Q: Yes, the Herald says it “would allow drugs and lifestyle changes, such as a healthy diet and more exercise, to be more effective before the devastating condition takes hold.”

A: That would make more sense if there were drugs and lifestyle changes that actually worked to stop the disease process.

Q: At least it’s a simple and accurate test. It’s just based on your sense of smell.

A: <dubious noises>

Q: But “almost all the participants, aged 57 to 85, who were unable to name a single scent had been diagnosed with dementia. And nearly 80 per cent of those who provided just one or two correct answers also had it.”

A: That’s not what the research says.

Q: It’s what the story says.

A: Yes. Yes, it is.

Q: Ok, what does the research say? It’s behind a paywall.

A: Here’s a graph
[graph of dementia rates by number of scents correctly identified]

Q: That matches the story, doesn’t it?

A: Check the axis labels.

Q: Oh. 8% and 10%? But couldn’t the labels just be wrong?

A: Rather than the Daily Mail? It’s possible, but the research paper also says “9% positive predictive value”, meaning that only 9% of those who are predicted to get dementia actually do, and that matches the graph.

Q: Um

A: And there’s a commentary in the same issue of the journal, headlined  Screening Is Not Benign and saying “No test with such a low [positive predictive value] would be taken seriously as a way to identify any disease in a population”

Q: But it’s still a big difference, isn’t it?

A: Yes, and it’s scientifically interesting that the nerves or brain cells related to smell seem to be damaged relatively early in the disease, but it’s not a predictive test.

 

[Update: the source for the error seems to be the University of Chicago press release.]

[Update: It’s on Stuff, too]

September 18, 2017

Another Alzheimer’s test

There’s a new Herald story with the lead

Artificial intelligence (AI) can identify Alzheimer’s disease 10 years before doctors can discover the symptoms, according to new research.

The story doesn’t link (even to the Daily Mail). Before we get to that, regular StatsChat readers will have some idea of what to expect.

Early diagnosis for Alzheimer’s is potentially useful when designing clinical trials for new treatments, and eventually will be useful for early treatment (when we get treatments that work).  But not yet.  It’s also not as much of a novelty as the story suggests. Candidate tests for early diagnosis are appearing all over the place (here’s seven of them).

Second, you’d expect the accuracy of the test and its degree of foresight to have been exaggerated — and the story confirms this.

Following the training, the AI was then asked to process brains from 148 subjects – 52 were healthy, 48 had Alzheimer’s disease and 48 had mild cognitive impairment (MCI) but were known to have developed Alzheimer’s disease two and a half to nine years later.

That is, the early diagnosis wasn’t of people without symptoms; it was of people whose symptoms had led to a diagnosis but didn’t amount to dementia.

The Herald doesn’t link, but Google finds a story at New Scientist, and they do link. The link is to the arXiv preprint server. That’s unusual: normally this sort of story is either complete vapour or is based on an article in a research journal.  This one is neither: it’s a real scientific report, but one that hasn’t yet been published — it’s probably undergoing peer review at the moment.

Anyway, the preprint is enough to look up the accuracy of the test. The sensitivity was high: nearly all Alzheimer’s cases and cases of Mild Cognitive Impairment were picked up. The specificity was terrible: more than a quarter of the healthy people tested would receive a false positive diagnosis.
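Low specificity matters so much because most people being screened won’t develop the disease, so even a modest false-positive rate swamps the true positives. A minimal sketch in Python; the sensitivity, specificity, and prevalence here are illustrative assumptions, not numbers from the preprint:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(disease | positive test result)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical inputs: a very sensitive test, specificity around 0.72
# (so more than a quarter of healthy people test positive), and a
# disease prevalence of 5% in the screened population.
print(round(ppv(0.95, 0.72, 0.05), 2))  # 0.15
```

Under those assumptions, only about one positive result in seven would be a true case, even with near-perfect sensitivity.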

It’s possible that this test can be re-tuned into a genuinely useful clinical tool. As published, though, it isn’t even close.

August 21, 2017

Effective treatment is effective

There’s a story in New Scientist, and in the NY Daily News, based on this research paper, saying that choosing alternative treatment instead of conventional treatment for cancer is bad for you.

The research is well done: they looked at the most common cancers in the US and found a small set of people who turned down all conventional treatment in favour of ‘alternative’ medicine. They matched these people on cancer type, age, clinical group stage, what other diseases they had, insurance type, race, and year of diagnosis, to a set who did get conventional treatment. Even after all that matching, there was a big difference in survival.

There are two caveats to the story. First, this is people who turned down all conventional treatment, even surgery. That’s rare. In the database they used, 99.98% of patients received some conventional treatment. It’s much more common for people to receive some or all of the recommended conventional treatment, plus other things — not ‘alternative’ but ‘complementary’ or ‘integrative’ medicine.

Second, the numbers are being misinterpreted.  For example, New Scientist says

Among those with breast cancer, people taking alternative remedies were 5.68 times more likely to die within five years.

The actual figures were 42% and 13%, so about 3.1 times more likely. Here’s the graph
[graph of five-year breast cancer survival, conventional vs alternative treatment]

Similarly, the New Scientist story says

They found that people who took alternative medicine were two and half times more likely to die within five years of diagnosis.

The actual figures were 45% and 26%; 1.75 times more likely.

What’s happening is a confusion of rate ratios and actual risks of death; these aren’t the same.  The rate (or hazard) is measured in % per year; the risk is measured in %.  The risk is capped at 100%; the rate doesn’t have an upper limit.   Because of the cap at 100%, risk ratios are mathematically less convenient to model than rate ratios. As a tradeoff, it’s harder to explain your results using rate ratios. The Yale publicity punted on the issue, not mentioning the numbers and leaving reporters to get it wrong.  When this happens, it’s the scientists’ fault, not the reporters’.
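The gap between the two kinds of ratio can be sketched under a constant-rate (exponential) model, where the risk over t years is 1 − exp(−rate × t). The numbers below are illustrative, not the paper’s adjusted estimates:

```python
import math

def risk_from_rate(rate_per_year: float, years: float) -> float:
    """Cumulative risk of an event under a constant yearly rate."""
    return 1 - math.exp(-rate_per_year * years)

years = 5
base_rate = 0.06   # illustrative control-group rate: 6% per year
rate_ratio = 2.5   # the kind of hazard ratio the paper reports

risk_control = risk_from_rate(base_rate, years)
risk_exposed = risk_from_rate(base_rate * rate_ratio, years)
risk_ratio = risk_exposed / risk_control

print(round(risk_control, 2), round(risk_exposed, 2), round(risk_ratio, 2))
# 0.26 0.53 2.04: a 2.5-fold rate ratio is only about a 2-fold risk ratio
```

Even with the rate 2.5 times higher, the five-year risk is only about twice as high, because the higher-risk group is closer to the 100% cap.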

 

June 14, 2017

Comparing sources

The Herald has a front-page-link “Daily aspirin deadlier than we thought”, for a real headline “Daily aspirin behind more than 3000 deaths a year, study suggests”.  The story (from the Daily Telegraph) begins

Taking a daily aspirin is far more dangerous than was thought, causing more than 3000 deaths a year, a major study suggests.

Millions of pensioners should reconsider taking pills which are taken by almost half of elderly people to ward off heart attacks and strokes, researchers said.

The study by Oxford University found that those over the age of 75 who take the blood-thinning pills are 10 times more likely than younger patients to suffer disabling or fatal bleeds.

The BBC also has a report on this research. Their headline is Aspirin ‘major bleed’ warning for over-75s, and the story starts

People over 75 taking daily aspirin after a stroke or heart attack are at higher risk of major – and sometimes fatal – stomach bleeds than previously thought, research in the Lancet shows.

Scientists say that, to reduce these risks, older people should also take stomach-protecting PPI pills.

But they insist aspirin has important benefits – such as preventing heart attacks – that outweigh the risks.

The basic message from the same underlying research seems very different. Sadly, neither story links to the open-access research paper, which has very good sections on the background to the research and what this new study added.

Basically, we know that aspirin reduces blood clotting.  This has good effects — reducing the risk of heart attacks and strokes — and also bad effects — increasing the risk of bleeding.   We do randomised trials to find out whether the benefits exceed the risks, and in the randomised trials they did for aspirin. However, the randomised trials were mostly in people under 75.

The new study looks at older people, but it wasn’t a randomised trial: everyone in the study was taking aspirin, and there was no control group.  The main comparisons were by age. Serious stomach bleeding was a lot more common in the oldest people in the study, so unless the beneficial effects of aspirin were also larger in these people, the tradeoff might no longer be favourable.

In particular, as the Herald/Telegraph story says, the tradeoff might be unfavourable for old-enough people who hadn’t already had a heart attack or stroke. That’s one important reason for the difference between the two stories.  The research only looked at people who had previously had a heart attack or stroke (or some similar reason to take aspirin). The BBC story focused mostly on these people (who should still take aspirin, but also maybe an anti-ulcer drug); the Herald/Telegraph story focused mostly on those taking aspirin purely as a precaution.

So, even though the Herald/Telegraph story was going for the scare headlines, the content was potentially helpful: absent any news coverage, the healthy aspirin users would be less likely to bring up the issue with their doctors.

 

December 22, 2016

Mouthwash secrets: the embargo problem

On Tuesday, the Herald and some other media outlets, and the occasional journalist’s Twitter account, published a story about mouthwash being able to prevent gonorrhea from spreading. Or, in some versions, cure it. The research paper behind the story wasn’t linked and hadn’t been published. This time it seems to have been the newspapers’ fault: the stories appeared before the end of the news embargo. The Herald story was pulled, then reappeared midday Wednesday with a link (yay).

Embargoes are an increasingly controversial topic in science journalism. The idea is that journalists get advance copies of a research paper and the press release, so they have time to look things up and ask for expert help or comment. There are organisations such as the NZ Science Media Centre to help with finding experts, or there’s your friendly neighbourhood university.

Sometimes, this works. Stories become more interesting and less slanted, or the journalist just decides the breakthrough wasn’t all that and the story is killed.  Without embargoes, allegedly, no-one would take the time to get it right. In medicine, too, there was the idea that doctors should be able to get the research paper by the time their patients saw the headlines.

On the other hand, embargoes feed into the idea that science stories are Breaking News that must be posted Right Now — that all published science is true (or important) for fifteen minutes. Ivan Oransky (who runs the Embargo Watch blog) argued recently at Vox that embargoes are no longer worthwhile; there’s also a rebuttal posted at Embargo Watch.

The Listerine/gonorrhea story, though, wasn’t new. Major outlets such as Teen Vogue and the BBC covered it in August (probably from a conference presentation). There are no details in the new Herald story that weren’t in the August stories. It’s hard to see how anyone gains from the embargo here — except perhaps as a way of synchronising a wave of publicity.

 

 

November 26, 2016

Where good news and bad news show up

In the middle of last year, the Herald had a story in the Health & Wellbeing section about solanezumab, a drug candidate for Alzheimer’s disease. The lead was

The first drug that slows down Alzheimer’s disease could be available within three years after trials showed it prevented mental decline by a third.

Even at the time, that was an unrealistically hopeful summary. The actual news was that solanezumab had just failed in a clinical trial, and its manufacturers, Eli Lilly, were going to try again, in milder disease cases, rather than giving up.

That didn’t work, either.  The story is in the Herald, but now in the Business section. The (UK) Telegraph, where the Herald’s good-news story came from, hasn’t yet mentioned the bad news.

If you read the health sections of the media you’d get the impression that cures for lots of diseases are just around the corner. You shouldn’t have to read the business news to find out that’s not true.