Posts filed under Risk (222)

March 14, 2016

Dementia and rugby

Dylan Cleaver has a feature story in the Herald on the Taranaki rugby team who won the Ranfurly Shield in 1964. Five of the 22 have been diagnosed with dementia. Early on in the process he asked me to comment on how surprising that was.

The key fact here is 1964: the five developed dementia fairly young, in their 60s and early 70s. That happens even in people who have no family history and no occupational risks, as I know personally, but it’s unusual.

I couldn’t find NZ data, but I did find a Dutch study (PDF, Table 3) estimating that a man who is alive and healthy at 55 has a 1.5% risk of diagnosed dementia by 70 and 3.2% by 75. There’s broadly similar data from the Framingham study in the US.  The chance of getting 5 or more out of 22 depends on exact ages and on how many died earlier of other causes, but if these were just 22 men chosen at random the chance would be less than 1 in 10,000 — probably much less.  People who know about rugby tell me the fact they were all in the back line is also relevant, and that makes the chance much smaller.
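For anyone who wants to check the order of magnitude, here’s a rough sketch of that calculation in Python. It makes the simplifying (and not quite right) assumptions that all 22 men survived to 70 and independently faced the Dutch 1.5%-by-70 risk; the real version would need their exact ages and deaths from other causes.

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) when X ~ Binomial(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# 22 players, each assumed (unrealistically) to survive to 70 and to
# carry the Dutch estimate of a 1.5% risk of diagnosed dementia by 70.
print(binom_tail(22, 0.015, 5))  # about 3.2e-05, i.e. roughly 1 in 30,000
```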

There are still at least two explanations. The first, obviously, is that rugby — at least as played in those days — caused similar cumulative brain damage to that seen in American football players. The second, though, is that we’re hearing about the 1964 Taranaki team partly because of the dementia cases — there wouldn’t have been this story if there had only been two cases, and there might have been a story about some other team instead. That is, it could be a combination of a tragic fluke and the natural human tendency to see patterns.  Statistics is bad at disentangling these; the issue crops up over and over again in cancer surveillance.

In the light of what has been seen in the US, I’d say it’s plausible that concussions contributed to the Taranaki cases.  There have already been changes to the game to reduce repeated concussions, which should reduce the risk in the future. There is also a case for more systematic evaluation of former players, to get a more reliable estimate of the risk, though the fact there’s nothing that can currently be done about it means that players and family members need to be involved in that decision.

March 9, 2016

Not the most literate?

The Herald (and/or the Otago Daily Times) say

New Zealand is the fifth most literate country in the world.

and

New Zealand ranked higher than Germany (9), Canada (10), the US (11), UK (14) and Australia (15).

Newshub had a similar story and the NZEI welcomed the finding.  One of the nice things about the Herald story is it provides a link. If you follow that link, the ratings look a bit different.


There are five other rankings in addition to the “Final Rank”, but none of them has NZ at number five.


So, where did the numbers come from? It can’t be a mistake at the Herald, because Newshub had the same numbers (as did Finland Today and basically everyone except the Washington Post).

Although nobody links, I did track down the press release. It has the ranks given by the Herald, and it has the quotes they used from the creator of the ranking.  The stories would have been written before the site went live, so the reporters wouldn’t have been able to check the site even if it had occurred to them to do so.  I have no idea how the press release managed to disagree with the site itself, and while it would be nice to see corrections published, I won’t hold my breath.


Underlying this relatively minor example is a problem with the intersection of ‘instant news’ and science that I’ve mentioned before.  Science stories are often written before the research is published, and often released before it appears. This is unnecessary except for the biggest events: the science would be just as true (or not) and just as interesting (or not) a day later.

At least the final rank still shows NZ beating Australia.

January 21, 2016

Mining uncertainty

The FDA collects data on adverse events in people taking any prescription drugs. This information is, as it should be, available for other uses. I’ve been involved in research using it.

The data are also available for less helpful purposes. As Scott Alexander found,  if you ask Google whether basically anything could cause basically anything, there are companies that make sure Google will return some pages reporting that precise association.  And, as he explains, this is serious.

For example, I tried “Adderall” and “plantar fasciitis” as an implausible combination and got 4 hits based on FDA data. And “Accutane” and “plantar fasciitis”, and “Advair” and “plantar fasciitis”, and “acyclovir” and “plantar fasciitis”. Then I got bored.

It’s presumably true that there are people who have been taking Adderall and at the same time have had plantar fasciitis. But given enough patients to work with, that will be true for any combination of drug and side effect. And, in fact, the websites will happily put up a page saying there are no reported cases, but still saying “you are not alone” and suggesting you join their support group.
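The arithmetic behind “that will be true for any combination” is worth a sketch. If taking a drug and having a condition are completely unrelated, the expected number of people with both is just the product of the two prevalences and the population size; every number below is invented purely for illustration, not taken from the FDA data.

```python
# Expected coincidental overlaps when drug use and a condition are
# completely unrelated. All numbers here are made up for illustration.
n_patients = 1_000_000   # hypothetical reporting population
p_drug = 0.01            # say 1% take the drug
p_condition = 0.05       # say 5% get plantar fasciitis at some point

print(n_patients * p_drug * p_condition)  # 500 people with both, by chance alone
```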

These websites are bullshit in the sense of philosopher Harry Frankfurt: it is irrelevant to their purpose whether Adderall really causes plantar fasciitis or not. They make their money from the question, not from the answer.


(via Keith Ng)

January 19, 2016

Rebooting your immune system?

OneNews had a strange-looking story about multiple sclerosis tonight, with lots of footage of one British guy who’d got much better after treatment, and some mentions of an ongoing trial. With the trial still going on, it wasn’t clear why there was publicity now, or why it mostly involved just one patient.

I Google these things so you don’t have to.

So. It turns out there was a new research paper behind the publicity. There is an international trial of immune stem cell transplant for multiple sclerosis, which plans to follow patients for five years after treatment. The research paper describes what happened for the first three years.

As the OneNews story says, there has been a theory for a long time that if you wipe out someone’s immune system and start over again, the new version wouldn’t attack the nervous system and the disease would be cured. The problem was two-fold. First, wiping out someone’s immune system is an extraordinarily drastic treatment — you give a lethal dose of chemotherapy, and then rescue the patient with a transplanted immune system. Second, it didn’t work reliably.

The researcher behind the current trial believes that the treatment would work reliably if it was done earlier — during one of the characteristic remissions in disease progress, rather than after all else fails. This trial involves 25 patients, and so far the results are reasonably positive, but three years is really too soon to tell whether the benefits are worth the treatment. Even with full follow-up of this uncontrolled study it probably won’t be clear exactly who the treatment is worthwhile for.

Why the one British guy? Well,

The BBC’s Panorama programme was given exclusive access to several patients who have undergone the stem cell transplant.

The news story is clipped from a more in-depth current-affairs programme. That BBC link also shows a slightly worrying paranoid attitude from the lead researcher:

He said: “There has been resistance to this in the pharma and academic world. This is not a technology you can patent and we have achieved this without industry backing.”

That might explain pharma, but there’s no real reason for the lack of patents to be a problem for academics. It’s more likely that doctors are reluctant to recommend ultra-high-dose chemotherapy without more concrete evidence. After all, it was supposed to work for breast cancer and didn’t, and it was theorised to work for HIV and doesn’t seem to. And at least in the past it didn’t work reliably for multiple sclerosis.

All in all, I think the OneNews story was too one-sided given the interim nature of the data and lack of availability of the treatment.  It could also have said a bit more about how nasty the treatment is.  I can see it being fine as part of a story in a current affairs programme such as Panorama, but as TV news I think it went too far.

January 18, 2016

The buck needs to stop somewhere

From Vox:

Academic press offices are known to overhype their own research. But the University of Maryland recently took this to appalling new heights — trumpeting an incredibly shoddy study on chocolate milk and concussions that happened to benefit a corporate partner.

Press offices get targeted when this sort of thing happens because they are a necessary link in the chain of hype.  On the other hand, unlike journalists and researchers, their job description doesn’t involve being skeptical about research.

For those who haven’t kept up with the story: the research is looking at chocolate milk produced by a sponsor of the study, compared to other sports drinks. The press release is based on preliminary unpublished data. The drink is fat-free, but contains as much sugar as Coca-Cola. And the press release also says

“There is nothing more important than protecting our student-athletes,” said Clayton Wilcox, superintendent of Washington County Public Schools. “Now that we understand the findings of this study, we are determined to provide Fifth Quarter Fresh to all of our athletes.”

which seems to have got ahead of the evidence rather.

This is exactly the sort of story that’s very unlikely to be the press office’s fault. Either the researchers or someone in management at the university must have decided to put out a press release on preliminary data and to push the product to the local school district. Presumably it was the same people who decided to do a press release on preliminary data from an earlier study in May — data that are still unpublished.

In this example the journalists have done fairly well: Google News shows that coverage of the chocolate milk brand is almost entirely negative.  More generally, though, there’s the problem that academics aren’t always responsible for how their research is spun, and as a result they always have an excuse.

A step in the right direction would be to have all research press releases explicitly endorsed by someone. If that person is a responsible member of the research team, you know who to blame. If it’s just a publicist, well, that tells you something too.

January 1, 2016

As dangerous as bacon?

From the Herald (from the Telegraph)

Using e-cigarettes is no safer than smoking tobacco with nicotine, scientists warned after finding the vapour damages DNA and could cause cancer.

Smoking tobacco is right up near the top of cancer risks that are easy to acquire, both in terms of how big the risk is and in terms of how strong the evidence is.

[There was some stuff here that was right as to the story in the Herald but wrong about the actual research paper, so I got rid of it. Some of the tests in the research paper used real cigarette smoke, and it was worse but not dramatically worse than the e-cig smoke]


The press release is a bit more responsibly written than the story. It describes some of the limitations of the lab tests, and makes it clear that the “no safer than smoking” is an opinion, not a finding. It also gets the journal name right (Oral Oncology) and links to the research paper.

It’s worth quoting the conclusion section from the paper. Here the researchers are writing for other people who understand the issues and whose opinion matters. I’ve deleted one sentence of technical detail basically saying “we saw DNA damage and cell death”.

In conclusion, our study strongly suggests that electronic cigarettes are not as safe as their marketing makes them appear to the public. [technical stuff]. Further research is needed to definitively determine the long-term effects of e-cig usage, as well as whether the DNA damage shown in our study as a result of e-cig exposure will lead to mutations that ultimately result in cancer.

That’s very different from the story.

December 14, 2015

A sense of scale

It was front page news in the Dominion Post today that about 0.1% of registered teachers had been investigated for “possible misconduct or incompetence in which their psychological state may have been a factor.”  Over a six-year period. And 5% of them (that is, 0.005% of all teachers) were struck off or suspended as a result.
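To spell out the arithmetic of scale, using just the figures from the story:

```python
investigated = 0.001               # 0.1% of registered teachers, over six years
struck_off = investigated * 0.05   # 5% of those investigated
print(f"{struck_off:.3%}")         # 0.005% of all registered teachers
```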

Actually, the front page news was even worse than that:

[photo of the original print front page]

but since the “mentally-ill” bit wasn’t even true, the online version has been edited.

Given the high prevalence of some of these psychological and neurological conditions and the lack of a comparison group, it’s not even clear that they increase the risk of being investigated or struck off. After all, an early StatsChat story was about a Dom Post claim that “hundreds of unfit teachers” were working in our schools, based on 664 complaints over two years.

It would be interesting to compare figures for, say, rugby players or journalists. Except that would be missing the other point.  As Jess McAllen writes at The Spinoff, the phrasing and placement of the story, especially the original one, is a clear message to anyone with depression, or anxiety, or ADHD. Anyone who wants to think about the children might think about what that message does for rather more than 0.1% of them.

(via @publicaddress)

November 27, 2015

What should data use agreements look like?

After the news about Jarrod Gilbert being refused access to crime data, it’s worth asking what data-use agreements should look like. I’m going to just consider agreements to use data for one’s own research — consulting projects and commissioned reports are different.

On Stuff, the police said

“Police reserves the right to discuss research findings with the academic if it misunderstands or misrepresents police data and information,” Evans said. 

Police could prevent further access to police resources if a researcher breached the agreement, he said. 

“Our priority is always to ensure that an appropriate balance is drawn between the privacy of individuals and academic freedom.

That would actually be reasonable if it only went that far: an organisation has confidential data, you get to see the data, they get to check whether you’ve reported anything that would breach their privacy restrictions. They can say “paragraph 2, on page 7, the street name together with the other information is identifying”, and you can agree or disagree, and potentially get an independent opinion from a mediator, ombudsman, arbitrator, or if it comes to that, a court.

The key here is that a breach of the agreement is objectively decidable and isn’t based on whether they like the conclusions. The problem comes with discretionary use of data. If the police have discretion about what analyses can be published, there’s no way to tell whether and to what extent they are misusing it. Even if they have discretion only over who can use the data, it’s hard to tell if they are using the implied threat of exclusion to persuade people to change results.

Medical statistics has a lot of experience with this sort of problem. That’s why the International Committee of Medical Journal Editors says, in their ‘conflict of interest’ recommendations

Authors should avoid entering in to agreements with study sponsors, both for-profit and non-profit, that interfere with authors’ access to all of the study’s data or that interfere with their ability to analyze and interpret the data and to prepare and publish manuscripts independently when and where they choose.

Under the ICMJE rules, I believe the sort of data-use restrictions we heard about for crime data would have to be disclosed as a conflict of interest.  The conflict wouldn’t necessarily lead to a paper being rejected, but it would be something for editors and reviewers to bear in mind as they looked at which results were presented and how they were interpreted.


November 25, 2015

Why we can’t trust crime analyses in New Zealand

Jarrod Gilbert has spent a lot of time hanging out with people in biker gangs.

That’s how he wrote his book, Patched, a history of gangs in New Zealand.  According to the Herald, it’s also the police’s rationale for not letting him have access to crime data. I don’t know whether it would be more charitable to the Police to accept that this is their real reason or not.

Conceivably, you might be concerned about access to these data for people with certain sorts of criminal connections. There might be ways to misuse the data, perhaps for some sort of scam on crime victims. No-one suggests that is the sort of association with criminals that Dr Gilbert has.

It gets worse. According to Dr Gilbert, also writing in the Herald, the standard data access agreement for the data says police “retain the sole right to veto any findings from release.” Even drug companies don’t get away with those sorts of clauses nowadays.

To the extent these reports are true, we can’t entirely trust any analysis of New Zealand crime data that goes beyond what’s publicly available. There might be a lot of research that hasn’t been affected by censorship and threats to block future work, but we have no way of picking it out.

November 15, 2015

Out of how many?

Stuff has a story under the headline ACC statistics show New Zealand’s riskiest industries. They don’t. They show the industries with the largest numbers of claims.

To see why that’s a problem, consider instead the number of claims by broad ethnicity grouping: 135,000 for European, 23,100 for Māori, and 10,800 for Pacific peoples (via StatsNZ). There’s no way that European ethnicity gives you a hugely greater risk of occupational injury than Māori or Pacific workers have. The difference between these groups is basically just population size. The true risks go in the opposite direction: 89 claims per 1000 full-time equivalent workers of European ethnicities, 97 for Māori, and 106 for Pacific.
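A risk is a count divided by a denominator, and you can back out the implied denominators from the figures above. A quick sketch (the implied FTE workforces are only approximate, since the quoted rates are rounded):

```python
# Claims and claim rates per 1000 FTE workers, as quoted above (via StatsNZ).
claims = {"European": 135_000, "Māori": 23_100, "Pacific": 10_800}
rate_per_1000 = {"European": 89, "Māori": 97, "Pacific": 106}

for group, n in claims.items():
    # rate = 1000 * claims / FTE, so the implied workforce is
    # FTE = 1000 * claims / rate
    fte = 1000 * n / rate_per_1000[group]
    print(f"{group}: {n:,} claims, ~{fte:,.0f} FTE workers, "
          f"{rate_per_1000[group]} claims per 1,000 FTE")
```

The biggest group has by far the most claims but the lowest rate: exactly the pattern the story’s “riskiest industries” ranking can’t distinguish.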

With just the total claims we can’t tell whether working in supermarkets and grocery stores is really much more dangerous than logging, as the story suggests. I’m dubious, but.