Posts filed under Risk (190)

January 21, 2016

Mining uncertainty

The FDA collects data on adverse events in people taking any prescription drug. This information is, as it should be, available for other uses. I’ve been involved in research using it.

The data are also available for less helpful purposes. As Scott Alexander found,  if you ask Google whether basically anything could cause basically anything, there are companies that make sure Google will return some pages reporting that precise association.  And, as he explains, this is serious.

For example, I tried “Adderall” and “plantar fasciitis” as an implausible combination and got 4 hits based on FDA data. And “Accutane” and “plantar fasciitis”, and “Advair” and “plantar fasciitis”, and “acyclovir” and “plantar fasciitis”. Then I got bored.

It’s presumably true that there are people who have been taking Adderall and at the same time have had plantar fasciitis. But given enough patients to work with, that will be true for any combination of drug and side effect. And, in fact, the websites will happily put up a page saying there are no reported cases, but still saying “you are not alone” and suggesting you join their support group.
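
To see why scale alone produces these hits, here is a minimal back-of-the-envelope sketch. The report count and the two prevalence figures are invented for illustration, not taken from the FDA database.

```python
# Expected number of purely coincidental drug + side-effect reports,
# assuming the two are completely unrelated (independent).
# All figures below are made-up illustrations, not FDA numbers.

reports = 10_000_000   # adverse-event reports in the database (assumed)
p_drug = 0.005         # fraction of reports mentioning the drug (assumed)
p_side_effect = 0.001  # fraction mentioning the side effect (assumed)

# Under independence, the chance a single report mentions both is the
# product of the two fractions, so the expected overlap grows with size.
expected_overlap = reports * p_drug * p_side_effect
print(f"Expected coincidental reports: {expected_overlap:.0f}")  # -> 50
```

With millions of reports, even a modest coincidental overlap like this is enough to populate a “your drug and your symptom” page.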

These websites are bullshit in the sense of philosopher Harry Frankfurt: it is irrelevant to their purpose whether Adderall really causes plantar fasciitis or not. They make their money from the question, not from the answer.


(via Keith Ng)

January 19, 2016

Rebooting your immune system?

OneNews had a strange-looking story about multiple sclerosis tonight, with lots of footage of one British guy who’d got much better after treatment, and some mentions of an ongoing trial. With the trial still going on, it wasn’t clear why there was publicity now, or why it mostly involved just one patient.

I Google these things so you don’t have to.

So. It turns out there was a new research paper behind the publicity. There is an international trial of immune stem cell transplant for multiple sclerosis, which plans to follow patients for five years after treatment. The research paper describes what happened for the first three years.

As the OneNews story says, there has been a theory for a long time that if you wipe out someone’s immune system and start over again, the new version wouldn’t attack the nervous system and the disease would be cured. The problem was two-fold. First, wiping out someone’s immune system is an extraordinarily drastic treatment — you give a lethal dose of chemotherapy, and then rescue the patient with a transplanted immune system. Second, it didn’t work reliably.

The researcher behind the current trial believes that the treatment would work reliably if it was done earlier — during one of the characteristic remissions in disease progress, rather than after all else fails. This trial involves 25 patients, and so far the results are reasonably positive, but three years is really too soon to tell whether the benefits are worth the treatment. Even with full follow-up of this uncontrolled study it probably won’t be clear exactly who the treatment is worthwhile for.

Why the one British guy? Well,

The BBC’s Panorama programme was given exclusive access to several patients who have undergone the stem cell transplant.

The news story is clipped from a more in-depth current-affairs programme. That BBC link also shows a slightly worrying paranoid attitude from the lead researcher:

He said: “There has been resistance to this in the pharma and academic world. This is not a technology you can patent and we have achieved this without industry backing.”

That might explain pharma, but there’s no real reason for the lack of patents to be a problem for academics. It’s more likely that doctors are reluctant to recommend ultra-high-dose chemotherapy without more concrete evidence. After all, it was supposed to work for breast cancer and didn’t, and it was theorised to work for HIV and doesn’t seem to. And at least in the past it didn’t work reliably for multiple sclerosis.

All in all, I think the OneNews story was too one-sided given the interim nature of the data and lack of availability of the treatment.  It could also have said a bit more about how nasty the treatment is.  I can see it being fine as part of a story in a current affairs programme such as Panorama, but as TV news I think it went too far.

January 18, 2016

The buck needs to stop somewhere

From Vox:

Academic press offices are known to overhype their own research. But the University of Maryland recently took this to appalling new heights — trumpeting an incredibly shoddy study on chocolate milk and concussions that happened to benefit a corporate partner.

Press offices get targeted when this sort of thing happens because they are a necessary link in the chain of hype.  On the other hand, unlike journalists and researchers, their job description doesn’t involve being skeptical about research.

For those who haven’t kept up with the story: the research is looking at chocolate milk produced by a sponsor of the study, compared to other sports drinks. The press release is based on preliminary unpublished data. The drink is fat-free, but contains as much sugar as Coca-Cola. And the press release also says

“There is nothing more important than protecting our student-athletes,” said Clayton Wilcox, superintendent of Washington County Public Schools. “Now that we understand the findings of this study, we are determined to provide Fifth Quarter Fresh to all of our athletes.”

which seems to have got ahead of the evidence rather.

This is exactly the sort of story that’s very unlikely to be the press office’s fault. Either the researchers or someone in management at the university must have decided to put out a press release on preliminary data and to push the product to the local school district. Presumably it was the same people who decided to do a press release on preliminary data from an earlier study in May — data that are still unpublished.

In this example the journalists have done fairly well: Google News shows that coverage of the chocolate milk brand is almost entirely negative.  More generally, though, there’s the problem that academics aren’t always responsible for how their research is spun, and as a result they always have an excuse.

A step in the right direction would be to have all research press releases explicitly endorsed by someone. If that person is a responsible member of the research team, you know who to blame. If it’s just a publicist, well, that tells you something too.

January 1, 2016

As dangerous as bacon?

From the Herald (from the Telegraph)

Using e-cigarettes is no safer than smoking tobacco with nicotine, scientists warned after finding the vapour damages DNA and could cause cancer.

Smoking tobacco is right up near the top of cancer risks that are easy to acquire, both in terms of how big the risk is and in terms of how strong the evidence is.

[There was some stuff here that was right as to the story in the Herald but wrong about the actual research paper, so I got rid of it. Some of the tests in the research paper used real cigarette smoke, and it was worse but not dramatically worse than the e-cig smoke]


The press release is a bit more responsibly written than the story. It describes some of the limitations of the lab tests, and makes it clear that the “no safer than smoking” is an opinion, not a finding. It also gets the journal name right (Oral Oncology) and links to the research paper.

It’s worth quoting the conclusion section from the paper. Here the researchers are writing for other people who understand the issues and whose opinion matters. I’ve deleted one sentence of technical detail that basically says “we saw DNA damage and cell death”.

In conclusion, our study strongly suggests that electronic cigarettes are not as safe as their marketing makes them appear to the public. [technical stuff]. Further research is needed to definitively determine the long-term effects of e-cig usage, as well as whether the DNA damage shown in our study as a result of e-cig exposure will lead to mutations that ultimately result in cancer.

That’s very different from the story.

December 14, 2015

A sense of scale

It was front page news in the Dominion Post today that about 0.1% of registered teachers had been investigated for “possible misconduct or incompetence in which their psychological state may have been a factor.”  Over a six-year period. And 5% of them (that is, 0.005% of all teachers) were struck off or suspended as a result.
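
For a sense of what those percentages mean in head counts, here is a toy calculation; the register size is a round assumed figure, not the actual number of registered teachers.

```python
# Rough scale check. The register size is an assumed round number,
# not the actual count of registered teachers.
teachers = 100_000                # assumed size of the teacher register
investigated = teachers * 0.001   # about 0.1% investigated over six years
struck_off = investigated * 0.05  # 5% of those struck off or suspended
print(investigated, struck_off)   # -> 100.0 5.0
```

On those assumed numbers, that is about a hundred investigations and five strike-offs or suspensions over six years.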

Actually, the front page news was even worse than that:

[image: the original front page]

but since the “mentally-ill” bit wasn’t even true, the online version has been edited.

Given the high prevalence of some of these psychological and neurological conditions and the lack of a comparison group, it’s not even clear that they increase the risk of being investigated or struck off. After all, an early StatsChat story was about a Dom Post claim that “hundreds of unfit teachers” were working in our schools, based on 664 complaints over two years.

It would be interesting to compare figures for, say, rugby players or journalists. Except that would be missing the other point.  As Jess McAllen writes at The Spinoff, the phrasing and placement of the story, especially the original one, is a clear message to anyone with depression, or anxiety, or ADHD. Anyone who wants to think about the children might think about what that message does for rather more than 0.1% of them.

(via @publicaddress)

November 27, 2015

What should data use agreements look like?

After the news about Jarrod Gilbert being refused access to crime data, it’s worth looking at what data-use agreements should look like. I’m going to just consider agreements to use data for one’s own research — consulting projects and commissioned reports are different.

On Stuff, the police said

“Police reserves the right to discuss research findings with the academic if it misunderstands or misrepresents police data and information,” Evans said. 

Police could prevent further access to police resources if a researcher breached the agreement, he said. 

“Our priority is always to ensure that an appropriate balance is drawn between the privacy of individuals and academic freedom.

That would actually be reasonable if it only went that far: an organisation has confidential data, you get to see the data, they get to check whether you’ve reported anything that would breach their privacy restrictions. They can say “paragraph 2, on page 7, the street name together with the other information is identifying”, and you can agree or disagree, and potentially get an independent opinion from a mediator, ombudsman, arbitrator, or if it comes to that, a court.

The key here is that a breach of the agreement is objectively decidable and isn’t based on whether they like the conclusions. The problem comes with discretionary use of data. If the police have discretion about what analyses can be published, there’s no way to tell whether and to what extent they are misusing it. Even if they have only discretion about who can use the data, it’s hard to tell if they are using the implied threat of exclusion to persuade people to change results.

Medical statistics has a lot of experience with this sort of problem. That’s why the International Committee of Medical Journal Editors says, in their ‘conflict of interest’ recommendations:

Authors should avoid entering in to agreements with study sponsors, both for-profit and non-profit, that interfere with authors’ access to all of the study’s data or that interfere with their ability to analyze and interpret the data and to prepare and publish manuscripts independently when and where they choose.

Under the ICMJE rules, I believe the sort of data-use restrictions we heard about for crime data would have to be disclosed as a conflict of interest.  The conflict wouldn’t necessarily lead to a paper being rejected, but it would be something for editors and reviewers to bear in mind as they looked at which results were presented and how they were interpreted.


November 25, 2015

Why we can’t trust crime analyses in New Zealand

Jarrod Gilbert has spent a lot of time hanging out with people in biker gangs.

That’s how he wrote his book, Patched, a history of gangs in New Zealand.  According to the Herald, it’s also the police’s rationale for not letting him have access to crime data. I don’t know whether it would be more charitable to the Police to accept that this is their real reason or not.

Conceivably, you might be concerned about access to these data for people with certain sorts of criminal connections. There might be ways to misuse the data, perhaps for some sort of scam on crime victims. No-one suggests that is  the sort of association with criminals that Dr Gilbert has.

It gets worse. According to Dr Gilbert, also writing in the Herald, the standard data access agreement for the data says police “retain the sole right to veto any findings from release.” Even drug companies don’t get away with those sorts of clauses nowadays.

To the extent these reports are true, we can’t entirely trust any analysis of New Zealand crime data that goes beyond what’s publicly available. There might be a lot of research that hasn’t been affected by censorship and threats to block future work, but we have no way of picking it out.

November 15, 2015

Out of how many?

Stuff has a story under the headline ACC statistics show New Zealand’s riskiest industries. They don’t. They show the industries with the largest numbers of claims.

To see why that’s a problem, consider instead the number of claims by broad ethnicity grouping: 135,000 for European, 23,100 for Māori, 10,800 for Pacific peoples (via StatsNZ). There’s no way that European ethnicity gives you a hugely greater risk of occupational injury than Māori or Pacific workers have. The difference between these groups is basically just population size. The true risks go in the opposite direction: 89 claims per 1000 full-time equivalent workers of European ethnicities, 97 for Māori, and 106 for Pacific.
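
The same point in miniature, using the figures above: the claim counts are as quoted, while the worker totals are rough full-time-equivalent numbers backed out from the quoted rates, so treat them as approximate.

```python
# Raw counts vs rates. Claim counts are as quoted above; the FTE worker
# totals are rough figures implied by the quoted rates (approximate).
claims = {"European": 135_000, "Māori": 23_100, "Pacific": 10_800}
fte_workers = {"European": 1_517_000, "Māori": 238_000, "Pacific": 102_000}

for group in claims:
    rate = claims[group] / fte_workers[group] * 1000
    print(f"{group:9s} {claims[group]:>8,} claims  ~{rate:.0f} per 1,000 FTE")
# European has by far the most claims but the lowest rate; the ranking
# flips once you divide by the number of workers.
```

Ranking industries by raw claim counts has exactly the same problem as ranking ethnic groups by raw claim counts.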

With just the total claims we can’t tell whether working in supermarkets and grocery stores is really much more dangerous than logging, as the story suggests. I’m dubious, but.

November 13, 2015

Blood pressure experiments

The two major US medical journals each published  a report this week about an experiment on healthy humans involving blood pressure.

One of these was a serious multi-year, multi-million-dollar clinical trial in over 9000 people, trying to refine the treatment of high blood pressure. The other looks like a borderline-ethical publicity stunt.  Guess which one ended up in Stuff.

In the experiment, 25 people were given an energy drink:

We hypothesized that drinking a commercially available energy drink compared with a placebo drink increases blood pressure and heart rate in healthy adults at rest and in response to mental and physical stress (primary outcomes). Furthermore, we hypothesized that these hemodynamic changes are associated with sympathetic activation, which could predispose to increased cardiovascular risk (secondary outcomes).

The result was that consuming caffeine made blood pressure and heart rate go up for a short period,  and that levels of the hormone norepinephrine  in the blood also went up. Oh, and that consuming caffeine led to more caffeine in the bloodstream than consuming no caffeine.

The findings about blood pressure, heart rate, and norepinephrine are about as surprising as the finding about caffeine in the blood. If you do a Google search on “caffeine blood pressure”, the recommendation box at the top of the results is advice from the Mayo Clinic. It begins

Caffeine can cause a short, but dramatic increase in your blood pressure, even if you don’t have high blood pressure.

The Mayo Clinic, incidentally, is where the new experiment was done.

I looked at the PubMed research database for research on caffeine and blood pressure.  The oldest paper in English for which I could get full text was from 1981. It begins

Acute caffeine in subjects who do not normally ingest methylxanthines leads to increases in blood pressure, heart rate, plasma epinephrine, plasma norepinephrine, plasma renin activity, and urinary catecholamines.

This wasn’t news even in 1981.

Now, I don’t actually like energy drinks; I prefer my caffeine hot and bitter.  Since many energy drinks have as much caffeine as good coffee and some have almost as much sugar as apple juice, there’s probably some unsafe level of consumption, especially for kids.

What I don’t like is dressing this up as new science. The acute effects of caffeine on the cardiovascular system have been known for a long time. It seems strange to do a new human experiment just to demonstrate them again. In particular, it seems ethically dubious if you think these effects are dangerous enough to put out a press release about.


November 6, 2015

Failure to read small print


This story/ad/column hybrid thing on the Herald site is making a good point, that people don’t read the detailed terms and conditions of things. Of course, reading the terms and conditions of things before you agree is often infeasible — I have read the Auckland Transport HOP card T&Cs, but I don’t reread them to make sure they haven’t changed every time I agree to them by getting on a bus, and it’s not as if I have much choice, anyway.  When the small print is about large sums of money, reading it is probably more important.

The StatsChat-relevant aspect, though, is the figure of $1000 per year for failing to read financial small print, which seemed strange. The quote:

Money Advice Service, a government-backed financial help centre in the UK, claims failure to read the small print is costing consumers an average of £428 (NZ$978) a year. It surveyed 2,000 consumers and found that only 84 per cent bothered to read the terms and conditions and, of those that did, only 17 per cent understood what they had read.

Here’s the press release (PDF) from Money Advice Service.  It surveyed 3000 people, and found that 84 per cent claimed they didn’t read the terms and conditions.

The survey asked people how much they believed misunderstanding financial terms in the last year had cost them. The average cost was £427.90.

So the figure is a bit fuzzier: it’s the average of what people reported believing they lost, which actually makes it more surprising. If you actually believed you, personally, were losing nearly a thousand dollars a year from not reading terms and conditions, wouldn’t you do something about it?

More importantly, it’s not failure to read the small print, it’s failure to understand it. The story claims only 17% of those who claimed to read the T&Cs thought they understood them — though I couldn’t find this number in the press release or on the Money Advice site, it is in the Mirror and, unsourced, in the Guardian.  The survey claims about a third misunderstood what ‘interest’  meant and of the 15% who had taken out a payday loan, more than half couldn’t explain what a ‘loan’ was, and one in five didn’t realise loans needed to be paid back.

As further evidence that either the survey is unreliable or the problem isn’t a simple failure to read, there was very little variation between regions of the UK in how many people said they read the small print, but huge variation (£128 to £1,014) in how much they said it cost them.

I’m not convinced we can trust this survey, but it’s not news that some people make unfortunate financial choices.  What would be useful is some idea of how often it’s really careless failure to read, how often it’s lack of basic education, how often it’s gotchas in the small print, and how often it’s taking out a loan you know is bad because the alternatives are worse.