March 23, 2017

This ain’t exactly real

From Matthew Beveridge on Twitter, who also has a post on reporting standards.

[Image: minor-party poll graphic tweeted by Matthew Beveridge]

In a poll of 1000 people, if we make the (generous) assumption that it’s a uniform random sample, these minor-party results mean 8 people for The Opportunities Party, 7 for the Māori Party, 4 for ACT and 4 for UnitedFuture.  That’s not a very impressive lead.

A 95% confidence interval (the equivalent of the usual margin-of-error statement, using my cheatsheet) for TOP would be 0.3% to 1.6%; for the Māori Party, 0.3% to 1.4%; and for ACT and UnitedFuture, 0.1% to 1.0%.  There’s a lot of overlap.  A 95% confidence interval for the ratio of TOP to ACT support goes from 0.63 to 7.5, so a more sophisticated analysis confirms the conclusion: ACT could be more popular than TOP, or much less popular, but we don’t have enough information to be sure. Obviously, the same thing is true, only more so, for the TOP/Māori comparison.
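For counts this small, the familiar 1/√n margin of error doesn’t apply. As a rough stdlib-only check (the cheatsheet presumably uses an exact or similar small-sample method, so the first decimal place can differ slightly), the Wilson score interval reproduces these intervals:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    centre = p + z**2 / (2 * n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    denom = 1 + z**2 / n
    return (centre - half) / denom, (centre + half) / denom

# Minor-party counts out of 1000, as in the post
for party, count in [("TOP", 8), ("Māori Party", 7), ("ACT", 4), ("UnitedFuture", 4)]:
    lo, hi = wilson_ci(count, 1000)
    print(f"{party}: {lo:.1%} to {hi:.1%}")
```

For 7 of 1000 this gives roughly 0.3% to 1.4%, matching the figures quoted above for the Māori Party.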

But there’s a more important problem: for at least three of these parties, the party vote is relatively unimportant. The Māori Party [correction] has won one list seat in its history, and neither ACT nor UnitedFuture has won one in the past two elections.  What matters most for these parties is their support in their key electorates.  It’s hard to poll for electorate support, both because it’s hard to sample from individual electorates and because you have to distinguish party-vote and electorate-vote intentions reliably. Judging from past election campaigns, we’ll probably get some polling of the Māori electorates, but it’s unlikely we’ll get any useful polling of Ōhāriu and Epsom.  However, I would feel pretty safe in predicting that ACT will end up with a seat, and much less confident that TOP will.

Democracy is coming

We have an election this year, so we are starting to have polling.

To save time, here are some potentially useful StatsChat posts about election polls:

  • A simple cheatsheet for working out the margin of error for minor parties (also including a simple Excel macro)
March 21, 2017

Super 18 Predictions for Round 5

Team Ratings for Round 5

The basic method is described on my Department home page.

Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team Current Rating Rating at Season Start Difference
Hurricanes 17.79 13.22 4.60
Chiefs 10.88 9.75 1.10
Crusaders 7.98 8.75 -0.80
Lions 7.32 7.64 -0.30
Highlanders 7.27 9.17 -1.90
Brumbies 4.06 3.83 0.20
Stormers 2.68 1.51 1.20
Waratahs 1.75 5.81 -4.10
Blues 0.98 -1.07 2.10
Sharks 0.70 0.42 0.30
Bulls -1.40 0.29 -1.70
Jaguares -1.70 -4.36 2.70
Force -8.10 -9.45 1.40
Cheetahs -8.14 -7.36 -0.80
Reds -9.75 -10.28 0.50
Rebels -11.97 -8.17 -3.80
Kings -17.66 -19.02 1.40
Sunwolves -19.78 -17.76 -2.00


Performance So Far

So far there have been 34 matches played, 25 of which were correctly predicted, a success rate of 73.5%.
Here are the predictions for last week’s games.

Game Date Score Prediction Correct
1 Crusaders vs. Blues Mar 17 33 – 24 10.70 TRUE
2 Rebels vs. Chiefs Mar 17 14 – 27 -19.70 TRUE
3 Bulls vs. Sunwolves Mar 17 34 – 21 23.70 TRUE
4 Hurricanes vs. Highlanders Mar 18 41 – 15 12.40 TRUE
5 Waratahs vs. Brumbies Mar 18 12 – 28 3.50 FALSE
6 Lions vs. Reds Mar 18 44 – 14 19.80 TRUE
7 Sharks vs. Kings Mar 18 19 – 17 24.60 TRUE
8 Jaguares vs. Cheetahs Mar 18 41 – 14 8.20 TRUE


Predictions for Round 5

Here are the predictions for Round 5. The prediction is my estimated expected points difference with a positive margin being a win to the home team, and a negative margin a win to the away team.

Game Date Winner Prediction
1 Crusaders vs. Force Mar 24 Crusaders 20.10
2 Rebels vs. Waratahs Mar 24 Waratahs -10.20
3 Blues vs. Bulls Mar 25 Blues 6.40
4 Brumbies vs. Highlanders Mar 25 Brumbies 0.80
5 Sunwolves vs. Stormers Mar 25 Stormers -18.50
6 Kings vs. Lions Mar 25 Lions -21.50
7 Cheetahs vs. Sharks Mar 25 Sharks -5.30
8 Jaguares vs. Reds Mar 25 Jaguares 12.10
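The full rating method is described on the Department home page, not here; still, the published margins look close to the rating difference plus a home-ground constant of about four points. A minimal sketch on that assumption (both the formula and the 4-point home advantage are guesses inferred from the tables, not stated in the post):

```python
# Hypothetical sketch: predicted margin = home rating - away rating + home advantage.
# The ~4-point home advantage is an assumption read off the tables above,
# not a figure stated in the post.
HOME_ADVANTAGE = 4.0

# Ratings copied from the Round 5 table
ratings = {"Crusaders": 7.98, "Force": -8.10, "Blues": 0.98, "Bulls": -1.40}

def predict_margin(home, away):
    """Expected points margin; positive favours the home team."""
    return ratings[home] - ratings[away] + HOME_ADVANTAGE

print(predict_margin("Crusaders", "Force"))  # close to the published 20.10
print(predict_margin("Blues", "Bulls"))      # close to the published 6.40
```

The same rough relationship holds for the NRL table below (e.g. Dragons vs. Warriors), which suggests a similar home-advantage term in both competitions.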


NRL Predictions for Round 4

Team Ratings for Round 4

The basic method is described on my Department home page.

Here are the team ratings prior to this week’s games, along with the ratings at the start of the season.

Team Current Rating Rating at Season Start Difference
Raiders 8.94 9.94 -1.00
Storm 8.44 8.49 -0.00
Sharks 5.65 5.84 -0.20
Broncos 5.58 4.36 1.20
Cowboys 4.58 6.90 -2.30
Panthers 4.51 6.08 -1.60
Roosters 0.98 -1.17 2.10
Eels 0.20 -0.81 1.00
Bulldogs -1.04 -1.34 0.30
Rabbitohs -2.01 -1.82 -0.20
Sea Eagles -2.63 -2.98 0.30
Titans -2.70 -0.98 -1.70
Dragons -4.15 -7.74 3.60
Wests Tigers -6.57 -3.89 -2.70
Warriors -7.48 -6.02 -1.50
Knights -14.34 -16.94 2.60


Performance So Far

So far there have been 24 matches played, 11 of which were correctly predicted, a success rate of 45.8%.
Here are the predictions for last week’s games.

Game Date Score Prediction Correct
1 Storm vs. Broncos Mar 16 14 – 12 7.20 TRUE
2 Bulldogs vs. Warriors Mar 17 24 – 12 10.10 TRUE
3 Titans vs. Eels Mar 17 26 – 14 -1.50 FALSE
4 Knights vs. Rabbitohs Mar 18 18 – 24 -9.50 TRUE
5 Panthers vs. Roosters Mar 18 12 – 14 8.80 FALSE
6 Cowboys vs. Sea Eagles Mar 18 8 – 30 16.50 FALSE
7 Raiders vs. Wests Tigers Mar 19 46 – 6 15.20 TRUE
8 Sharks vs. Dragons Mar 19 10 – 16 16.80 FALSE


Predictions for Round 4

Here are the predictions for Round 4. The prediction is my estimated expected points difference with a positive margin being a win to the home team, and a negative margin a win to the away team.

Game Date Winner Prediction
1 Rabbitohs vs. Roosters Mar 23 Rabbitohs 0.50
2 Panthers vs. Knights Mar 24 Panthers 22.30
3 Broncos vs. Raiders Mar 24 Broncos 0.10
4 Sea Eagles vs. Bulldogs Mar 25 Sea Eagles 1.90
5 Eels vs. Sharks Mar 25 Sharks -2.00
6 Titans vs. Cowboys Mar 25 Cowboys -3.80
7 Wests Tigers vs. Storm Mar 26 Storm -11.50
8 Dragons vs. Warriors Mar 26 Dragons 7.30


Believing surveys

There was a story on Stuff yesterday claiming that 85% of Kiwis brush their teeth too hard, based on a survey by a company that sells soft toothbrushes. The survey involved over 1000 people, and that’s about all we know, except that the reported rate of twice-per-day brushing was about 15 percentage points higher than in the 2009 NZ Oral Health Survey.

Mark Hanna tweeted about this survey-by-press-release issue:

[Image: Mark Hanna’s tweet]

I want to expand on this.  Why is research, especially survey research, different?  When David Fisher interviews people involved with gambling addiction, we don’t need anything more than his story. When Fletcher Building says they are cutting $150m from their profit estimates, we don’t need anything more than their press release. So why isn’t it enough that this toothbrush company says they have a survey?

The issue is responsibility.  If an investigative journalist reports statements from people, it’s the journalist’s reputation that makes those reports credible. If there are anonymous sources, again it’s the reputation of the journalist and the newspaper that makes us believe the sources really exist and that their claims are credible.  When a company says it’s introducing a new product or is revising its income estimates, the company is the only authoritative source of information, and the claims are treated as mere claims by the company, not as facts.

With a survey press release, the journalist typically isn’t vouching for the correctness of the interpretation or the validity of the methodology; that’s not their expertise.  And we can’t tell from the toothbrush story whether it was a real survey, or a well-calibrated online panel, or whether it was just a bogus clicky poll on a website somewhere. There’s no attribution, and there’s no responsibility. Even so, the claims don’t get treated as mere advertising; they get reported in basically the same way as all research findings.

If we’re going to treat a survey of this sort as showing anything,  and if the journalist isn’t vouching for the information, the minimum standard is that we can find out what was done.  The company doesn’t need to nerd up its press release with details, but they can put them on a website somewhere — how they found people, what the response rate was, something about who they sampled, what the actual questions were. Or, if their survey was done by a reputable market research firm, tell us that, and at least we know someone who understands the issues is standing behind the claims.

March 20, 2017

A blood test for autism?

There’s a story on Radio NZ Nine to Noon on a blood test for autism that is supposedly 96% accurate, and on what its implications might be for earlier specialised care of kids on the autism spectrum. As you’d expect from Radio NZ, the questions about the implications of a test were good.  The interview also raised the question of whether the test, developed on kids aged 3-10, would work equally well on younger children where the benefit is potentially greater.

What got a bit less attention was the meaning of “96% accurate”.  In the research study, 81 of 83 children on the autism spectrum and 73 of 76 neurotypical kids with no history of behavioral or neurologic abnormalities were correctly diagnosed, which is very impressive. But that’s not how the test would be used. In practice you might be screening the whole population, or screening kids with relatives on the autism spectrum, or diagnosing kids where ASD is suspected.

For whole-population screening, the 3.9% false-positive rate is more of a problem.  Based on current US statistics, roughly 1.5% of children are on the autism spectrum. So, of 1000 kids tested, about 15 would be correctly diagnosed and about 39 would be false positives. Now, it could be that the benefit to the 15 is much larger than the harm to the 39 and the test is worthwhile, but if we were going to have policy discussions about the test in that context, “96% accuracy” wouldn’t be a helpful way to describe it.
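The screening arithmetic can be checked directly, using the sensitivity (81/83) and specificity (73/76) reported in the study and the 1.5% US prevalence:

```python
n = 1000
prevalence = 0.015
sensitivity = 81 / 83   # correctly flagged, among ASD kids in the study
specificity = 73 / 76   # correctly cleared, among neurotypical kids

cases = n * prevalence                        # ~15 children on the spectrum
true_pos = cases * sensitivity                # ~15 correctly diagnosed
false_pos = (n - cases) * (1 - specificity)   # ~39 false positives

ppv = true_pos / (true_pos + false_pos)       # chance a positive test is right
print(round(true_pos), round(false_pos), round(ppv, 2))
```

So in whole-population screening only about one positive result in four would be correct, despite the “96% accuracy”.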

For children who have relatives with ASD you’d expect it to be harder to separate out ‘neurotypical’ from not. And that’s what the research shows:

[Figure 3 from the research paper: distributions of blood-test results]

The red and blue curves show the variation in blood test results for unrelated neurotypical and autism-spectrum children. The yellow shows the results for neurotypical siblings of children on the autism spectrum: there’s a lot less separation, and more tests will be wrong. Again, the test might be accurate enough to be useful, but “96% accuracy” is misleading.

Perhaps the way the test could be most helpful is as a step in diagnosis for children whose behaviour makes parents or doctors suspect ASD. There, the proportion on the autism spectrum might be more similar to the 50% in the study. However, we’ve currently got no idea how accurate the test is in such children; no-one has looked.

Stat of the Week Competition: March 18 – 24 2017

Each week, we would like to invite readers of Stats Chat to submit nominations for our Stat of the Week competition and be in with the chance to win an iTunes voucher.

Here’s how it works:

  • Anyone may add a comment on this post to nominate their Stat of the Week candidate before midday Friday March 24 2017.
  • Statistics can be bad, exemplary or fascinating.
  • The statistic must be in the NZ media during the period of March 18 – 24 2017 inclusive.
  • Quote the statistic, when and where it was published and tell us why it should be our Stat of the Week.

Next Monday at midday we’ll announce the winner of this week’s Stat of the Week competition, and start a new one.


March 18, 2017

Briefly

  • A guy tracked all the words his son learnt in the first 20 months, and made graphs.
  • Based on survey data, Peter Beinart argues in the Atlantic “those who don’t regularly attend church are more likely to suffer from divorce, addiction, and financial distress.” Fred Clark (who does regularly attend church) has a different suggestion “It’s also entirely possible — and quite likely once you allow yourself to think about it — that all of these things make church-going more difficult and less likely.” 
  • From the archive that noted.co.nz now has online, Mark Broatch interviews Eula Biss about her book on vaccination.
  • Apps that use your phone to monitor your health or nudge you towards better behaviour have obvious potential.  One of the first evaluations of an app based on Apple’s ResearchKit shows that the potential isn’t matched by actual use: “only 131 participants took at least a week’s worth of surveys and a six-month milestone survey. That’s 1.7 percent”. Story at Ars Technica; research paper.
  • The new newsroom.co.nz has a good story on the history and uptake of a treatment for premature babies that came from New Zealand research on sheep.
  • Mark Hanna has an interactive graphic comparing how often NZ Police use different tactical options for stopping people, by ethnicity.
  • Does it matter how long you’ve already waited for a bus? In New York, not really.
  • Nat Dudley talks about colour-blindness and accessibility of graphics.

Two cheers for genomics

PCSK9 inhibitors are one of the high-profile stories of genomics in medical research.  The gene’s function was previously unknown: it was identified as important for cholesterol metabolism by genetic studies.  People with mutations that increase the activity of the protein have high LDL cholesterol; people with mutations that destroy the activity have low LDL cholesterol. And, importantly, there’s at least one person walking around, alive and healthy, with mutations breaking both her copies of the gene, so inhibiting it looked relatively safe.  It was an obvious target for drug development and a showcase for the benefits of large-scale genetic research.

Three drug companies have made injectable antibodies that block the activity of PCSK9 and dramatically lower LDL cholesterol. One dropped out last year because their drug got attacked by the patients’ immune systems.  We’re now seeing the first results of clinical trials looking at whether the LDL cholesterol reduction leads to fewer heart attacks.

From a New Zealand point of view, the results are mostly of theoretical interest. Pharmac isn’t likely to subsidise these treatments for large groups of people any time soon.  However, we do still care what the trials show, because they help answer some questions about cholesterol. The research paper is here; two good commentaries on it are here and here.

Amgen’s drug, evolocumab, reduced LDL cholesterol by about two-thirds (from an average of 2.3 mmol/l to 0.78 mmol/l). The combined rate of heart attack, stroke, and death from heart disease was 20% lower; there was only a 15% reduction in the longer shopping list of bad events that the study put its money on for the primary analysis.

So, first, lowering LDL cholesterol by a different mechanism from statins has also resulted in lower heart attack rates.  That reinforces the evidence that LDL cholesterol really matters; it’s not just a marker like smoke from a fire.  Given the largely failed efforts to improve health with drugs that raise HDL cholesterol, this is good to know. Second, lowering LDL cholesterol this way seems fairly safe. There wasn’t any detectable harm (apart from the localised symptoms of the injections themselves). There could be rarer or more subtle effects, of course.

And finally, while the results are qualitatively positive, the actual scale of the benefit is a bit disappointing.  With an average 2.2 years of followup for 13784 people in the treatment group, about 200 heart attacks, strokes, or heart disease deaths were postponed or prevented.  At current US prices of $14,000 per year, that would cost over US$420 million.
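A quick check of that back-of-envelope figure, using only the numbers in the paragraph above:

```python
people = 13784            # treatment group in the trial
years = 2.2               # average follow-up
price_per_year = 14_000   # current US price, in dollars
events_prevented = 200    # approximate heart attacks/strokes/deaths avoided

total_cost = people * years * price_per_year
print(f"total: US${total_cost / 1e6:.0f} million")        # over US$420 million
print(f"per event: US${total_cost / events_prevented / 1e6:.1f} million")
```

That works out to roughly US$2 million per event prevented at current prices, which is why the benefit looks disappointing relative to cost.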

So, two cheers for genomics in drug development.

March 17, 2017

Is ibuprofen killing you?

The Herald story starts off

Commonly bought over-the-counter painkillers including ibuprofen have been linked to a significant increased risk of cardiac arrest.

The research paper is here (but paywalled).

First, it’s important to remember that “significant” in this context means “detectable” rather than “important.” The risk was higher by about 30%, but cardiac arrest is fairly rare.  With ten years of complete data from Denmark (about 5.5 million people) the researchers accumulated 30,000 cardiac arrests: that’s about five cases per ten thousand people per year.

As usual, this is observational data looking at correlations; the harmful effect, if it’s real, is too small to see reliably in clinical trials.  The researchers used a clever study design where they compared use of painkillers in a cardiac-arrest patient both with the same patient at times in the past and with different patients at the same time.  Differences between people that are constant over time (like smoking) will cancel out of the analysis; differences over time that are constant between people (like season) will also cancel out.  The design doesn’t cancel out non-constant factors like starting an exercise programme that leaves your muscles and joints sore.  It’s not unreasonable that a risk difference this small could be explained by confounding factors.

There’s something more important wrong with the story, though. You might wonder how people who have cardiac arrest get asked about their painkiller use. They didn’t; the study used prescription data.  For many of the painkillers, prescription is the only source; in particular, that’s the case for diclofenac (Voltaren), where the apparent risk increase in the study was a bit larger.

Ibuprofen, however, is available over the counter in Denmark, just as it is here. It’s available in fairly small packages, and is labelled for short-term use, just as it is here. Over-the-counter sale is what the story is basically about, but the study didn’t look at over-the-counter use at all.