Posts filed under Medical news (339)

January 10, 2018

Complete balls

The UK’s Metro magazine has a dramatic story under the headline Popping ibuprofen could make your balls shrivel up

Got a pounding headache?

You might just want to give a big glass of water and a nap a go before reaching for the painkillers. Scientists warn that ibuprofen could be wrecking men’s fertility by making their balls shrivel up.

Sounds pleasant.

Fortunately, that’s not what the study showed.

The story goes on

Researchers looked at 31 male participants and found that taking ibuprofen reduced production of testosterone by nearly a quarter in the space of around six weeks.

That’s also almost completely untrue. In fact, the research paper says (emphasis added)

We investigated the levels of total testosterone and its direct downstream metabolic product, 17β-estradiol. Administration of ibuprofen did not result in any significant changes in the levels of these two steroid hormones after 14 d or at the last day of administration at 44 d. The levels of free testosterone were subsequently analyzed by using the SHBG levels. Neither free testosterone nor SHBG levels were affected by ibuprofen.

Stuff has a much better take on this one:

Men who take ibuprofen for longer than the bottle advises could be risking their fertility, according to a new study.

Researchers found that men who took ibuprofen for extended periods had developed a condition normally seen in elderly men and smokers that, over time, can lead to fertility problems.

Ars Technica has the more accurately boring headline Small study suggests ibuprofen alters testosterone metabolism.

The study involved 14 men taking the equivalent of six tablets a day of ibuprofen for six weeks (plus a control group). Their testosterone levels didn’t change, but the interesting research finding is that this was due to compensation for what would otherwise have been a decrease. That is, a hormone signalling to increase testosterone production was elevated.  There’s a potential risk that if the men kept taking ibuprofen at this level for long enough, the compensation process might give up. And that would potentially lead to fertility problems — though not (I don’t think) to the problems Metro was worried about.

So, taking ibuprofen for months on end without a good reason? Probably inadvisable. Like it says on the pack.


November 23, 2017

More complicated than that

Science Daily

Computerized brain-training is now the first intervention of any kind to reduce the risk of dementia among older adults.

Daily Telegraph

Pensioners can reduce their risk of dementia by nearly a third by playing a computer brain training game similar to a driving hazard perception test, a new study suggests.

Ars Technica

Speed of processing training turned out to be the big winner. After ten years, participants in this group—and only this group—had reduced rates of dementia compared to the controls

The research paper is here, and the abstract does indeed say “Speed training resulted in reduced risk of dementia compared to control, but memory and reasoning training did not”.

They’re overselling it a bit. First, these are the confidence intervals for the ratios of dementia cases with and without each of the three types of training, showing the uncertainty as well as the point estimates


Summarising this as “speed training works but the other two don’t” is misleading.  There’s pretty marginal evidence that speed training is beneficial and even less evidence that it’s better than the other two.
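There’s a general statistical point lurking here: “A is significant and B isn’t” is not evidence that A differs from B. Here’s a sketch in Python with hypothetical hazard ratios and confidence intervals, roughly in the spirit of the paper’s results but not its actual numbers, showing that a “significant” and a “non-significant” interval can be statistically indistinguishable from each other:

```python
import math

def z_from_ci(hr, lo, hi):
    """Recover an approximate z-statistic for log(HR) from a 95% CI."""
    log_hr = math.log(hr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    return log_hr / se, se

# Hypothetical numbers (NOT the paper's actual values):
# speed training "significant", memory training "not significant".
speed_z, speed_se = z_from_ci(0.71, 0.51, 0.99)    # CI just excludes 1
memory_z, memory_se = z_from_ci(0.79, 0.57, 1.11)  # CI includes 1

# Test whether the two training effects differ from EACH OTHER:
diff = math.log(0.71) - math.log(0.79)
se_diff = math.sqrt(speed_se**2 + memory_se**2)
z_diff = diff / se_diff
print(round(z_diff, 2))  # well inside +/-1.96: no evidence of a difference
```

The comparison between the two treatments is much less precise than either treatment’s comparison with control, which is why “speed training works but the other two don’t” overstates what the data can say.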

On top of that, the results are for less than half the originally-enrolled participants, the ‘dementia’ they’re measuring isn’t a standard clinical definition, and this is a study whose 10-year follow-up ended in 2010 and that had a lot of ‘primary outcomes’ it was looking for — which didn’t include the one in this paper.

The study originally expected to see positive results after two years. It didn’t. Again, after five years, the study reported “Cognitive training did not affect rates of incident dementia after 5 years of follow-up.”  Ten-year results, reported in 2014, showed relatively modest differences in people’s ability to take care of themselves, as Hilda Bastian commented.

So. This specific type of brain training might actually help. Or one of the other sorts of brain training they tried might help. Or, quite possibly, none of them might help.  On the other hand, these are relatively unlikely to be harmful, and maybe someone will produce an inexpensive app or something.

September 30, 2017

Simple and ineffective

Q: Did you see there’s a new test to predict dementia?

A: Another one?

Q: Yes, the Herald says it “would allow drugs and lifestyle changes, such as a healthy diet and more exercise, to be more effective before the devastating condition takes hold”.

A: That would make more sense if there were drugs and lifestyle changes that actually worked to stop the disease process.

Q: At least it’s a simple and accurate test. It’s just based on your sense of smell.

A: <dubious noises>

Q: But “almost all the participants, aged 57 to 85, who were unable to name a single scent had been diagnosed with dementia. And nearly 80 per cent of those who provided just one or two correct answers also had it”.

A: That’s not what the research says.

Q: It’s what the story says.

A: Yes. Yes, it is.

Q: Ok, what does the research say? It’s behind a paywall.

A: Here’s a graph

Q: That matches the story, doesn’t it?

A: Check the axis labels.

Q: Oh. 8% and 10%? But couldn’t the labels just be wrong?

A: Rather than the Daily Mail? It’s possible, but the research paper also says “9% positive predictive value”, meaning that only 9% of those who are predicted to get dementia actually do, and that matches the graph.

Q: Um

A: And there’s a commentary in the same issue of the journal, headlined Screening Is Not Benign, saying “No test with such a low [positive predictive value] would be taken seriously as a way to identify any disease in a population”.

Q: But it’s still a big difference, isn’t it?

A: Yes, and it’s scientifically interesting that the nerves or brain cells related to smell seem to be damaged relatively early in the disease, but it’s not a predictive test.
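The mismatch between a big relative difference and a 9% positive predictive value is just base-rate arithmetic. Here’s a sketch in Python with made-up sensitivity, specificity, and prevalence (not the study’s actual values), chosen only to show how a test can separate the groups several-fold and still be wrong about nine in ten of its positive calls:

```python
def screen(sens, spec, prev):
    """2x2 screening arithmetic on fractions of a screened population."""
    tp = sens * prev              # true positives
    fp = (1 - spec) * (1 - prev)  # false positives
    fn = (1 - sens) * prev        # false negatives
    tn = spec * (1 - prev)        # true negatives
    ppv = tp / (tp + fp)                # P(dementia | positive test)
    risk_if_negative = fn / (fn + tn)   # P(dementia | negative test)
    return ppv, risk_if_negative

# Hypothetical values: 50% sensitivity, 90% specificity, 2% prevalence.
ppv, risk_neg = screen(sens=0.5, spec=0.9, prev=0.02)
print(round(ppv, 2), round(risk_neg, 3), round(ppv / risk_neg, 1))
```

With these numbers the positives are about eight times more likely to have dementia than the negatives, a genuinely big relative difference, yet the PPV is still only about 9%, because most people in the sample don’t have dementia in the first place.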


[Update: the source for the error seems to be the University of Chicago press release.]

[Update: It’s on Stuff, too]

September 18, 2017

Another Alzheimer’s test

There’s a new Herald story with the lead

Artificial intelligence (AI) can identify Alzheimer’s disease 10 years before doctors can discover the symptoms, according to new research.

The story doesn’t link (even to the Daily Mail). Before we get to that, regular StatsChat readers will have some idea of what to expect.

Early diagnosis for Alzheimer’s is potentially useful when designing clinical trials for new treatments, and eventually will be useful for early treatment (when we get treatments that work).  But not yet.  It’s also not as much of a novelty as the story suggests. Candidate tests for early diagnosis are appearing all over the place (here’s seven of them).

Second, you’d expect the accuracy of the test and its degree of foresight to have been exaggerated — and the story confirms this.

Following the training, the AI was then asked to process brains from 148 subjects – 52 were healthy, 48 had Alzheimer’s disease and 48 had mild cognitive impairment (MCI) but were known to have developed Alzheimer’s disease two and a half to nine years later.

That is, the early diagnosis wasn’t of people without symptoms, it was of people whose symptoms had led to a diagnosis but didn’t amount to dementia.

The Herald doesn’t link, but Google finds a story at New Scientist, and they do link. The link is to the arXiv preprint server. That’s unusual: normally this sort of story is either complete vapour or is based on an article in a research journal.  This one is neither: it’s a real scientific report, but one that hasn’t yet been published — it’s probably undergoing peer review at the moment.

Anyway, the preprint is enough to look up the accuracy of the test. The sensitivity was high: nearly all cases of Alzheimer’s and of Mild Cognitive Impairment were picked up. The specificity was terrible: more than a quarter of the healthy people tested would receive a false-positive diagnosis.
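To make “more than 1/4 false positives” concrete, here’s the arithmetic for screening 1000 people. The specificity of 0.73 is consistent with the preprint’s figure; the sensitivity and, especially, the prevalence are assumed for illustration (the true prevalence would depend heavily on the age group screened):

```python
# Rough screening arithmetic for a specificity of about 73%.
per_screened = 1000
prevalence = 0.02      # assumed: 2% of screened people are early cases
sensitivity = 0.95     # assumed high, as the preprint suggests
specificity = 0.73     # roughly the preprint's figure

cases = per_screened * prevalence
healthy = per_screened - cases
true_pos = sensitivity * cases            # real early cases caught
false_pos = (1 - specificity) * healthy   # healthy people flagged
print(round(false_pos / true_pos, 1))     # roughly 14 false alarms per real case
```

At anything like population prevalence, the false positives swamp the true ones, which is why a test like this can’t be used for screening as it stands.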

It’s possible that this test can be re-tuned into a genuinely useful clinical tool. As published, though, it isn’t even close.

August 21, 2017

Effective treatment is effective

There’s a story in New Scientist, and in the NY Daily News, based on this research paper, saying that choosing alternative treatment instead of conventional treatment for cancer is bad for you.

The research is well done: they looked at the most common cancers in the US and found a small set of people who turned down all conventional treatment in favour of ‘alternative’ medicine. They matched these people on cancer type, age, clinical group stage, what other diseases they had, insurance type, race, and year of diagnosis, to a set who did get conventional treatment. Even after all that matching, there was a big difference in survival.

There are two caveats to the story. First, this is people who turned down all conventional treatment, even surgery. That’s rare. In the database they used, 99.98% of patients received some conventional treatment. It’s much more common for people to receive some or all of the recommended conventional treatment, plus other things — not ‘alternative’ but ‘complementary’ or ‘integrative’ medicine.

Second, the numbers are being misinterpreted.  For example, New Scientist says

Among those with breast cancer, people taking alternative remedies were 5.68 times more likely to die within five years.

The actual figures were 42% and 13%, so about 3.2 times more likely. Here’s the graph

Similarly, the New Scientist story says

They found that people who took alternative medicine were two and half times more likely to die within five years of diagnosis.

The actual figures were 45% and 26%; about 1.7 times more likely.

What’s happening is a confusion of rate ratios and actual risks of death; these aren’t the same.  The rate (or hazard) is measured in % per year; the risk is measured in %.  The risk is capped at 100%; the rate doesn’t have an upper limit.   Because of the cap at 100%, risk ratios are mathematically less convenient to model than rate ratios. As a tradeoff, it’s harder to explain your results using rate ratios. The Yale publicity punted on the issue, not mentioning the numbers and leaving reporters to get it wrong.  When this happens, it’s the scientists’ fault, not the reporters’.
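The distinction is easy to check numerically. Taking the breast-cancer figures above (42% vs 13% dead at five years) and assuming constant hazards over time (a simplification for illustration; the paper actually fitted a Cox model, which is part of why its published ratio is higher again):

```python
import math

# Five-year risks of death: 42% (alternative) vs 13% (conventional).
risk_alt, risk_conv = 0.42, 0.13

risk_ratio = risk_alt / risk_conv   # ratio of risks (%)

# Under constant hazards, rate = -log(survival) / time, and the time
# cancels when you take the ratio:
rate_ratio = math.log(1 - risk_alt) / math.log(1 - risk_conv)

print(round(risk_ratio, 1), round(rate_ratio, 1))  # 3.2 3.9
```

Even in this simplified setting the rate ratio (3.9) exceeds the risk ratio (3.2), because the rate keeps accumulating while the risk is squeezed up against 100%. Quoting a hazard ratio like 5.68 as “times more likely to die” conflates the two.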


June 14, 2017

Comparing sources

The Herald has a front-page-link “Daily aspirin deadlier than we thought”, for a real headline “Daily aspirin behind more than 3000 deaths a year, study suggests”.  The story (from the Daily Telegraph) begins

Taking a daily aspirin is far more dangerous than was thought, causing more than 3000 deaths a year, a major study suggests.

Millions of pensioners should reconsider taking pills which are taken by almost half of elderly people to ward off heart attacks and strokes, researchers said.

The study by Oxford University found that those over the age of 75 who take the blood-thinning pills are 10 times more likely than younger patients to suffer disabling or fatal bleeds.

The BBC also has a report on this research. Their headline is Aspirin ‘major bleed’ warning for over-75s, and the story starts

People over 75 taking daily aspirin after a stroke or heart attack are at higher risk of major – and sometimes fatal – stomach bleeds than previously thought, research in the Lancet shows.

Scientists say that, to reduce these risks, older people should also take stomach-protecting PPI pills.

But they insist aspirin has important benefits – such as preventing heart attacks – that outweigh the risks.

The basic message from the same underlying research seems very different. Sadly, neither story links to the open-access research paper, which has very good sections on the background to the research and what this new study added.

Basically, we know that aspirin reduces blood clotting.  This has good effects — reducing the risk of heart attacks and strokes — and also bad effects — increasing the risk of bleeding.   We do randomised trials to find out whether the benefits exceed the risks, and in the randomised trials they did for aspirin. However, the randomised trials were mostly in people under 75.

The new study looks at older people, but it wasn’t a randomised trial: everyone in the study was taking aspirin, and there was no control group.  The main comparisons were by age. Serious stomach bleeding was a lot more common in the oldest people in the study, so unless the beneficial effects of aspirin were also larger in these people, the tradeoff might no longer be favourable.

In particular, as the Herald/Telegraph story says, the tradeoff might be unfavourable for old-enough people who hadn’t already had a heart attack or stroke. That’s one important reason for the difference between the two stories.  The research only looked at people who had previously had a heart attack or stroke (or some similar reason to take aspirin). The BBC story focused mostly on these people (who should still take aspirin, but also maybe an anti-ulcer drug); the Herald/Telegraph story focused mostly on those taking aspirin purely as a precaution.

So, even though the Herald/Telegraph story was going for the scare headlines, the content was potentially helpful: absent any news coverage, the healthy aspirin users would be less likely to bring up the issue with their doctors.


December 22, 2016

Mouthwash secrets: the embargo problem

On Tuesday, the Herald, some other media outlets, and the occasional journalist’s Twitter account published a story about mouthwash being able to prevent gonorrhea from spreading. Or, in some versions, cure it. The research paper behind the story wasn’t linked and hadn’t been published. This time it seems to have been the newspapers’ fault: the stories appeared before the end of the news embargo. The Herald story was pulled, then reappeared midday Wednesday with a link (yay).

Embargoes are an increasingly controversial topic in science journalism. The idea is that journalists get advance copies of a research paper and the press release, so they have time to look things up and ask for expert help or comment. There are organisations such as the NZ Science Media Centre to help with finding experts, or there’s your friendly neighbourhood university.

Sometimes, this works. Stories become more interesting and less slanted, or the journalist just decides the breakthrough wasn’t all that and the story is killed.  Without embargoes, allegedly, no-one would take the time to get it right. In medicine, too, there was the idea that doctors should be able to get the research paper by the time their patients saw the headlines.

On the other hand, embargoes feed into the idea that science stories are Breaking News that must be posted Right Now — that all published science is true (or important) for fifteen minutes. Ivan Oransky (who runs the Embargo Watch blog) argued recently at Vox that embargoes are no longer worthwhile; there’s also a rebuttal posted at Embargo Watch.

The Listerine/gonorrhea story, though, wasn’t new. Major outlets such as Teen Vogue and the BBC covered it in August (probably from a conference presentation). There are no details in the new Herald story that weren’t in the August stories. It’s hard to see how anyone gains from the embargo here — except perhaps as a way of synchronising a wave of publicity.



November 26, 2016

Where good news and bad news show up

In the middle of last year, the Herald had a story in the Health & Wellbeing section about solanezumab, a drug candidate for Alzheimer’s disease. The lead was

The first drug that slows down Alzheimer’s disease could be available within three years after trials showed it prevented mental decline by a third.

Even at the time, that was an unrealistically hopeful summary. The actual news was that solanezumab had just failed in a clinical trial, and its manufacturers, Eli Lilly, were going to try again, in milder disease cases, rather than giving up.

That didn’t work, either.  The story is in the Herald, but now in the Business section. The (UK) Telegraph, where the Herald’s good-news story came from, hasn’t yet mentioned the bad news.

If you read the health sections of the media you’d get the impression that cures for lots of diseases are just around the corner. You shouldn’t have to read the business news to find out that’s not true.

November 4, 2016

Unpublished clinical trials

We’ve known since at least the 1980s that there’s a problem with clinical trial results not being published. Tracking the non-publication rate is time-consuming, though. There’s a new website out that tries to automate the process, and a paper that claims it’s fairly accurate, at least for the subset of trials in the registry it covers. It picks up results in most medical journals, and also results posted directly to the registry — an alternative pathway for boring results such as dose equivalence studies for generics.

Here’s the overall summary for all trial organisers with more than 30 registered trials:


The overall results are pretty much what people have been claiming. The details might surprise you if you haven’t looked into the issue carefully. There’s a fairly pronounced difference between drug companies and academic institutions — the drug companies are better at publishing their trials.

For example, compare Merck to the Mayo Clinic

It’s not uniform, but the trend is pretty clear.


October 4, 2016

Depression and the pill

There’s a recent paper from Denmark finding that women, particularly young women, who used hormonal contraceptives were more likely also to be diagnosed with  depression.  The Guardian has a sensible story reporting on the paper (though given the topic it’s a pity the external experts they talked to were both men). There’s also an opinion piece, which conveys the importance of the issue but is clearly written by someone whose opinions were decided before the research came out. I was asked on Twitter what I thought.

One of the more difficult cases for science communication is where the evidence is neither negligible nor overwhelming, and that’s the situation here.  There’s nothing intrinsically unlikely about an effect on depression, and there are some ways that this study is very good, but there are also some limitations to the data that make the evidence weaker.

First, the good points. The study involved the entire Danish population over nearly 20 years, meaning that it was large enough to be fairly reliable on whether correlations are present or not, and also that it was comprehensive — it didn’t miss people out.  The data on who used hormonal contraceptives comes from the national health system and so should be accurate. The two definitions of depression — ‘prescribed anti-depressant drugs’ and ‘psychiatrist diagnosis of depression’ — will be measured reliably, and the decisions will have been taken by people who don’t have any particular view on the study question.  There’s information on timing, so we know the contraceptives were used before the depression. The associations are strong enough to care about, but not so strong as to be implausible. The analysis is well done given the data.

However, there are at least two alternative explanations for the correlation that aren’t ruled out by these data. The first is that the depression definitions require seeing a doctor and asking for (or at least accepting) treatment, and women who take hormonal contraceptives are probably more likely to see a doctor regularly.  The second explanation, which the researchers do consider, is that break-ups of relationships are a cause of depression, especially in younger people, and being in these relationships might be related to using hormonal contraceptives.  The researchers don’t believe this explanation, and they may be right, but their data don’t rule it out.

It’s not that either of these explanations is necessarily more likely than a direct effect of hormones, but if there weren’t alternative explanations the evidence would be stronger.  For example, if the researchers had been able to compare women using hormonal contraceptives just to those using non-hormonal contraceptives (eg copper IUDs and condoms), and had still seen the same correlation, the second explanation would be much less plausible and the evidence for a direct effect would be more convincing.

Also, if there were a straightforward hormonal explanation I would have expected different types of contraceptive to have stronger or weaker associations according to the dose of, say, progestins. In fact what they saw is that less commonly used contraceptives had stronger associations: weakest for the combined pills, stronger for progestin-only ‘mini-pills’ and stronger still for patch and implant methods. Again, this certainly doesn’t rule out a direct effect, but it weakens the evidence.

If a similar study were done in another country with different patterns of contraceptive use and found similar results, the evidence would become stronger. A study with fewer women but more detailed information on mental and emotional health — such as one of the birth-cohort studies — might be able to say more about what leads up to episodes of depression in young women and might be able to say something about who is at most risk. There’s still going to be uncertainty.

So. It’s hard to say for sure. There is definitely some evidence that hormonal contraceptives increase the risk of depression. If the effect is real, it’s useful to know that it seems to be largely in women under 20, largely in the first year of use, and might be worse for the ‘mini-pill’ than the traditional pill.  There’s a lot already known — good and bad — about hormonal contraceptives, but this research paper does add something.