November 4, 2016

Unpublished clinical trials

We’ve known since at least the 1980s that there’s a problem with clinical trial results not being published. Tracking the non-publication rate is time-consuming, though.  There’s a new website out that tries to automate the process, and a paper that claims it’s fairly accurate, at least for the subset of trials registered at ClinicalTrials.gov.  It picks up most medical journals and also picks up results published directly at ClinicalTrials.gov — an alternative pathway for boring results such as dose equivalence studies for generics.

Here’s the overall summary for all trial organisers with more than 30 registered trials:

[Figure: non-publication rates for all organisers with more than 30 registered trials]

The overall results are pretty much what people have been claiming. The details might surprise you if you haven’t looked into the issue carefully. There’s a fairly pronounced difference between drug companies and academic institutions — the drug companies are better at publishing their trials.

For example, compare Merck to the Mayo Clinic:

[Figures: publication rates for Merck and for the Mayo Clinic]

It’s not uniform, but the trend is pretty clear.

 

Fighting wrinkles

Q: So, lots of good health news today!

A: <suspiciously> Yes?

Q: Eating tomatoes prevents wrinkles and skin cancer! And it’s going to be tomato season soon.

A: Not convinced.

Q: Why? Did the people have to eat too many tomatoes? Is that even possible?

A: No tomatoes were involved in the study. People took capsules of oil with tomato extract high in lycopene or lutein.

Q: Sounds a bit of a waste. But still, reducing wrinkles and sun damage generally must be good.

A: They didn’t measure wrinkles or skin cancer either.

Q: So what did they measure?

A: Activity of some genes related to skin damage by ultraviolet light.

Q: And these were significantly reduced, right?

A: Yes, but ‘significantly’ here just means ‘detectably’. It doesn’t necessarily translate into a lot of protection.

Q: Do they have an estimate of how much protection?

A: The Herald story says an earlier study found taking lycopene supplements to be as effective as an SPF 1.3 sunscreen.

Q: Only SPF 13? Still, if that’s just from the supplement it’s pretty impressive.

A: Not 13. SPF 1.3.

Q: Ok, so that’s not so impressive. But tomato season and sunscreen season peak at the same time, and every bit helps.

A: Actually, if it really is the lycopene, your horiatiki salad isn’t going to work — lycopene isn’t well absorbed from fresh tomatoes.

 

November 3, 2016

Briefly

  • Story about startup company claiming to tailor wine advice to your genome. “Their motto of ‘A little science and a lot of fun’ would be more accurately put as ‘No science and a lot of fun,’”
  • “US Broadband Providers Will Need Permission to Collect Private Data” from the New York Times. Providers get to see exactly what websites you visit and how many pages you read there.  And they know where you live and where you internet. Selling that information will now be opt-in.
  • Insurance firm Swiss Re thinks health insurance rates will soon be targeted using social media. But the heart-disease research they mention only looked at predicting heart-disease rates by county of residence, and insurance companies knew where you live long before Big Data.
  • Along the same lines, car insurance firm Admiral was planning to set rates for young drivers based on social media data. Facebook is Not Happy. But actually, this looks more like AMI’s current advertising pitch “We treat young drivers like good drivers”.  You get people to sign up, and raise their rates if you find out they aren’t good drivers.
  • SMBC comic on survivor bias: “Nobody wants to read about the hero who left the farm and immediately got stabbed by highwaymen”
  • Insider trading involves misuse of “material, non-public information.” With predictive analytics, it gets much harder to decide what’s material and what’s non-public
November 2, 2016

Lotto demographics

The headlines at both the Herald and Stuff say they’re about Lotto winners, but the vastly more numerous losers have to have basically the same demographics. That means any statistics drawn from a group of 12 winners are going to be very unreliable.

There are some more reliable sources.  There’s (limited) information released by NZ Lotteries under the Official Information Act.  There’s also more detailed survey data from the 2012 Health and Lifestyles Survey (PDF).

Of the 12 people in today’s stories, 11 were men, even though men and women play Lotto at about the same rate. There’s a lot less variation by household income than I would have guessed. There is some variation by ethnicity, with Asians being less likely to play Lotto. People under 25 are a bit less likely to play. It’s all pretty boring.
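To see just how little a sample of 12 can tell you, here's a quick sketch in Python (not from the original post; the 1.96 multiplier is the usual normal approximation for a 95% interval):

```python
import math

# If men and women really do play Lotto at about the same rate (p = 0.5),
# the standard error of a proportion estimated from n = 12 people is huge.
n = 12
p = 0.5
se = math.sqrt(p * (1 - p) / n)   # ~0.14
margin = 1.96 * se                # half-width of a 95% interval, ~0.28

print(f"95% margin of error: about ±{margin:.0%}")
```

An uncertainty of nearly 30 percentage points in either direction swamps any real demographic differences among Lotto players.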

I’ve complained a few times that clicky bogus polls have an error rate as bad as a random sample of about ten people, and are useless.  Here we have a random sample of about ten people, and it’s pretty useless.

Except as advertising.

 

October 31, 2016

Give a dog a bone?

From the Herald (via Mark Hanna)

Warnings about feeding bones to pets are overblown – and outweighed by the beneficial effect on pets’ teeth, according to pet food experts Jimbo’s.

and

To back up their belief in the benefits of bones, Jimbo’s organised a three-month trial in 2015, studying the gums and teeth of eight dogs of various sizes.

Now, I’m not a vet. I don’t know what the existing evidence is on the benefits or harms of bones and raw food in pets’ diets. The story indicates that it’s controversial. So does Wikipedia, but I can’t tell whether this is ‘controversial’ as in the Phantom Time Hypothesis or ‘controversial’ as in risks of WiFi or ‘controversial’ as in the optimal balance of fats in the human diet. Since I don’t have a pet, this doesn’t worry me. On the other hand, I do care what the newspapers regard as reliable evidence, and Jimbo’s ‘Bone A Day’ Dental Trial is a good case to look at.

There are two questions at issue in the story: is feeding bones to dogs safe, and does it prevent gum disease and tooth damage? The small size of the trial limits what it can say about both questions, but especially about safety.  Imagine that a diet including bones resulted in serious injuries for one dog in twenty, once a year on average. That’s vastly more dangerous than anyone is actually claiming, but 90% of studies this small would still miss the risk entirely.  A study of eight dogs for three months will provide almost no information about safety.
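The back-of-envelope arithmetic behind that claim can be checked directly. This Python sketch uses the hypothetical 1-in-20-per-year injury rate from the paragraph above (not a real estimate of any risk from bones):

```python
# Hypothetical injury rate: 1 dog in 20 per year.
# Over a 3-month trial the per-dog risk is (1/20) * (3/12).
risk_per_dog = (1 / 20) * (3 / 12)   # 0.0125
n_dogs = 8

# Probability that a trial of 8 dogs sees no injuries at all:
p_miss = (1 - risk_per_dog) ** n_dogs
print(f"Chance the trial misses the risk entirely: {p_miss:.0%}")  # ~90%
```

Even a risk far larger than anyone alleges would, nine times out of ten, produce zero injuries in a study this size.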

For the second question, the small study size was aggravated by a shortage of gum disease among the dogs recruited. Of the eight dogs, two scored ‘Grade 2’ on the dental grading, meaning “some gum inflammation, no gum recession”, and none scored worse than that. Of the two dogs with ‘some gum inflammation’, one improved. For the other six dogs, the study was effectively reduced to looking at tartar — and while that’s presumably related to gum and tooth disease, and can lead to it, it’s not the same thing. You might well be willing to take some risk to prevent serious gum disease; you’d be less willing to take any risk to prevent tartar. Of the four dogs with ‘Grade 1: mild tartar’, two improved. A total of three dogs improving out of eight isn’t much to go on (unless you know that improvement is naturally very unusual, which they didn’t claim).

One important study-quality issue isn’t clear: the study description says the dental grading was based on photographs, which is good. What they don’t say is when the photograph evaluation was done. If all the ‘before’ photos were graded before the study and all the ‘after’ photos were graded afterwards, there’s a lot of room for bias to creep into the evaluation. For that reason, medical studies are often careful to mix up ‘before’ and ‘after’ or ‘treated’ and ‘control’ images and measure them all at once. It’s possible that Jimbo’s did this, and that the person doing the grading didn’t know which was ‘before’ and which was ‘after’ for a given dog. If before-after wasn’t masked this way, we can’t be very confident even that three dogs improved and none got worse.

And finally, we have to worry about publication bias. Maybe I’m just cynical, but it’s hard to believe this study would have made the Herald if the results had been unfavourable.

All in all, after reading this story you should still believe whatever you believed previously about dogfood. And you should be a bit disappointed in the Herald.

Stat of the Week Competition: October 29 – November 4 2016

Each week, we would like to invite readers of Stats Chat to submit nominations for our Stat of the Week competition and be in with the chance to win an iTunes voucher.

Here’s how it works:

  • Anyone may add a comment on this post to nominate their Stat of the Week candidate before midday Friday November 4 2016.
  • Statistics can be bad, exemplary or fascinating.
  • The statistic must be in the NZ media during the period of October 29 – November 4 2016 inclusive.
  • Quote the statistic, when and where it was published and tell us why it should be our Stat of the Week.

Next Monday at midday we’ll announce the winner of this week’s Stat of the Week competition, and start a new one.


October 30, 2016

Suboptimal ways to present risk

Graeme Edgeler nominated this, from PBS Frontline, to @statschat as a bad graph

[Figure: PBS Frontline graphic on Mediterranean crossing deaths]

It’s actually almost a good graph, but I think it’s trying to do too many things at once. There are two basic numerical facts: the number of people trying to cross the Mediterranean to escape the Syrian crisis has gone down substantially; the number of deaths has stayed about the same.

If you want to show the increase in risk, it’s much more effective to use a fixed denominator —  the main reason to use this sort of graph is that people pick up risk information better as frequencies than as fractions.

Here’s the comparison using the same denominator, 269, for the two years. It’s visually obvious that there has been a three-fold increase in death rate.

[Figures: deaths per 269 crossings, 2015 vs 2016]
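The rescaling is just a proportion. Assuming the rates behind the graphic were roughly 1 death per 269 crossings in 2015 and 1 per 88 in 2016 (the 1-in-88 figure is my reading of the Frontline numbers, not stated above), putting both years on the same denominator looks like this:

```python
# Assumed rates: 1 death per 269 crossings in 2015, ~1 per 88 in 2016.
rate_2015 = 1 / 269
rate_2016 = 1 / 88

denominator = 269
deaths_2015 = rate_2015 * denominator   # 1.0
deaths_2016 = rate_2016 * denominator   # ~3.1, the three-fold increase

print(f"Per {denominator} crossings: {deaths_2015:.1f} deaths (2015) "
      f"vs {deaths_2016:.1f} (2016)")
```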

It’s harder to convey all the comparisons clearly in one graph. A mosaic plot would work for higher proportions, which we can all hope doesn’t become a relevant fact.

 

Briefly

  • A long post on the use and misuse of the ‘Twitter firehose’, from Bloomberg View
  • A long story at Stuff about discharge without conviction, though a bit undermined by the fact that, as the story says, “[the] number of discharges without conviction has plummeted, from 3189 in 2011, to 2103 in 2015”.
  • While the idea of  predicting the US election using mako sharks (carchariamancy?) is no sillier than psychic meerkats or lucky lotto retailers, I don’t think the story really works unless the people pushing it at least pretend to believe it.
  • On the other hand, some people did seriously argue that shark attacks affected the results of presidential elections. And were wrong
October 29, 2016

Uncertainty and symmetry in the US elections

Nate Silver’s predictions at 538 give Donald Trump a much higher chance of winning the election than anyone else’s: at the time of writing, 20% vs 8% from the Upshot, 5% from Daily Kos, or 1% from Sam Wang at the Princeton Election Consortium.

That’s mostly not because Nate Silver thinks Trump is doing much better: 538 estimates 326 Electoral College votes for Clinton; Daily Kos has 334; the Princeton folks have 335.  The popular vote margin is estimated as 5.7% by 538 and about 8.4% by Princeton (their ‘meta-margin’ is 4.2%).

Everyone also pretty much agrees that the uncertainty in the votes is symmetric: if the polls are wrong, the estimated support for Clinton could as easily be too high as too low.  But that’s the uncertainty in the margin, not in the chance of winning.  Probabilities can’t go above 100% or below 0%, and when they get close to these limits, a symmetric uncertainty in the vote margin has to turn into an asymmetric uncertainty in the probability prediction, and a larger uncertainty has to pull the probability further away from the boundaries.
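To make that concrete, here's a sketch in Python (the numbers are illustrative, not anyone's actual model): treat the final vote margin as normally distributed around the polled margin, and see what the same lead turns into under a small versus a large assumed polling error.

```python
from math import erf, sqrt

def win_probability(margin, sigma):
    """P(win) if the final vote margin is Normal(mean=margin, sd=sigma)."""
    return 0.5 * (1 + erf(margin / (sigma * sqrt(2))))

# The same 4-point lead under two polling-error assumptions:
for sigma in (2.0, 6.0):
    print(f"polling error sd = {sigma}: "
          f"P(win) = {win_probability(4.0, sigma):.0%}")
```

The margin uncertainty is symmetric in both cases, but the larger sd pulls the probability from about 98% down towards 75% — closer to the 50% boundary, just as the paragraph above describes.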

Nate Silver’s model thinks that opinion polls can be off by 6 or 7 percent in either direction even this close to the election; the others don’t. It’s a question that history can’t definitively answer, because there isn’t enough history to work with. If Silver is wrong, we won’t know even after the election; even if he’s right, the most likely outcome is for the results to look pretty much like everyone predicts.

October 28, 2016

False positives

Before a medical diagnostic test is introduced, it is supposed to be evaluated carefully for accuracy. In particular, if the test is going to be used on the whole population, it’s important to know the false positive rate: of the people who test positive, what proportion really have a problem?  Part of  this process is to make sure that the test works as a biological or chemical assay: is it accurately measuring, say, carbon monoxide or glucose in the blood.  But that’s only part of the process.  You also need to worry about what threshold to use — how high is ‘high’ — and whether people could have high carbon monoxide levels without being smokers, or high glucose levels without being diabetic.
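The arithmetic behind that worry is a one-line Bayes calculation. This Python sketch uses made-up numbers (the prevalence, sensitivity, and specificity are purely illustrative, not measurements of any real test):

```python
# Made-up numbers, purely illustrative -- not any real test's performance.
prevalence  = 0.01   # 1% of those tested genuinely have the problem
sensitivity = 0.99   # the assay detects 99% of real cases
specificity = 0.95   # and correctly clears 95% of the rest

true_pos  = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
fraction_real = true_pos / (true_pos + false_pos)

print(f"Fraction of positives that are real: {fraction_real:.0%}")  # ~17%
```

Even with an assay that is 99% sensitive and 95% specific, when the problem is rare most positive results are false — which is why the threshold and the population matter as much as the chemistry.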

I haven’t heard any suggestion that the tests for methamphetamine contamination in houses fail the first step. There’s meth present when they find it. But Housing NZ were treating the high assay value as evidence that (a) the house was dangerous to live in, and (b) that the tenant was responsible. The false positive rates for (a) and (b) were not established, and appear to be shockingly high given the consequences.

The Ministry of Health has now released new guidelines on meth contamination, with concentration thresholds based on evidence (though towards the low end of what their evidence would support).  They claim to have repeatedly warned Housing NZ. Russell Brown has an excellent summary of the situation at Public Address.

While this is all a step forward, it’s not addressing the question of (b) above: if there’s methamphetamine present at above the new action threshold, it appears that this is still going to be taken as evidence of the tenant’s culpability. That would only make sense if, contrary to the advertising from the meth-testing companies, low-level meth contamination were very rare in rented NZ houses.