Posts written by Thomas Lumley (1843)


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

August 29, 2016

Lucky lotto stores

From the Northern Advocate

An unprecedented run of success in selling winning Lotto second division winning tickets has a Whangarei store on tenterhooks expecting an even bigger win soon.

Now, in one sense this is rubbish: lotto is drawn randomly. Previous wins can’t function as an outward and visible sign of an inward propensity to sell lucky tickets, because there is no such thing.

On the other hand, statistically, you would expect a store that has sold a lot of winning tickets in the past to sell a lot of winning tickets in the future. That’s because a store that has sold a lot of winning tickets has probably just sold a lot of tickets.
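To make that concrete, here is a tiny simulation sketch (all the numbers are invented, not taken from Lotto NZ): ticket volume varies a lot between stores, and winners occur purely at random among tickets sold, yet stores with more past wins still average more future wins, simply because they sell more tickets.

```python
# Hypothetical illustration: wins are random among tickets sold, but ticket
# volume differs hugely between stores, so past wins predict future wins.
import numpy as np

rng = np.random.default_rng(2016)

n_stores = 200
tickets_per_year = rng.lognormal(mean=9, sigma=1, size=n_stores)  # volume varies a lot
p_win = 1e-4                              # assumed chance any one ticket wins a prize

past_wins = rng.poisson(tickets_per_year * p_win)
future_wins = rng.poisson(tickets_per_year * p_win)   # independent draw, same volumes

print("correlation between past and future wins:",
      round(np.corrcoef(past_wins, future_wins)[0, 1], 2))
print("average future wins, stores with 2+ past wins:",
      round(future_wins[past_wins >= 2].mean(), 2))
print("average future wins, stores with no past wins:",
      round(future_wins[past_wins == 0].mean(), 2))
```

Even though each win is pure luck, the ‘lucky’ stores in this sketch keep looking lucky, because past wins are mostly a proxy for sales volume.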

A ‘lucky’ lotto vendor will usually be one that’s made a lot of profits for Lotto New Zealand. As to whether its customers are lucky, well, you don’t tend to see stories like this set in Herne Bay or Thorndon.

Briefly

  • 538 has a new Twitter bot, censusAmericans, which tweets little descriptions of individuals from what I think must be the American Community Survey, though they describe it as the census.
  • “Relaxing Privacy Vow, WhatsApp Will Share Some Data With Facebook” (NY Times). “Relaxing” is such a nice way to put that, but as various people have pointed out, this is what happens when companies build up volumes of data.
  • A nice app for exploring how differences in some measurement (‘biomarker’) between groups of people (fail to) translate into reliable tests.
August 20, 2016

Briefly

  • Mining data from Lending Club. And Matt Levine’s comments: Here are 50 data points about this loan. Do what you want… And if there’s no field for “does this person have another LendingClub loan,” and if that data point would have been helpful, well, sometimes that happens.
  • It’s just gone Saturday in the US, so it is no longer National Potato Day, and it won’t be National Spumoni Day until Sunday. Nathan Yau has a graphic of the 214 days that are National <some food> Day.
  • Because genetic association studies are (or were) largely done in people of European ancestry, they can overpredict risks in everyone else. (NY Times). (The implication that this is also true of non-genetic research is, at least, exaggerated)

The statistical significance filter

Attention conservation notice: long and nerdy, but does have pictures.

You may have noticed that I often say about newsy research studies that they are barely statistically significant or that they found only weak evidence, but that I don’t say that about large-scale clinical trials. This isn’t (just) personal prejudice. There are two good reasons why any given evidence threshold is more likely to be met in lower-quality research — and while I’ll be talking in terms of p-values here, getting rid of them doesn’t solve this problem (it might solve other problems).  I’ll also be talking in terms of an effect being “real” or not, which is again an oversimplification but one that I don’t think affects the point I’m making.  Think of a “real” effect as one big enough to write a news story about.

[Figure evidence01: simulated test results, 10% of effects real, under-powered studies]

This graph shows possible results in statistical tests, for research where the effect of the thing you’re studying is real (orange) or not real (blue).  The solid circles are results that pass your statistical evidence threshold, in the direction you wanted to see — they’re press-releasable as well as publishable.

Only about half the ‘statistically significant’ results are real; the rest are false positives.

I’ve assumed the proportion of “real” effects is about 10%. That makes sense in a lot of medical and psychological research — arguably, it’s too optimistic.  I’ve also assumed the sample size is too small to reliably pick up effects of plausible size — sadly, this is also realistic.
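As a back-of-the-envelope check, a minimal sketch using assumed numbers (a 5% significance threshold and about 40% power, neither taken from the post), Bayes’ rule gives roughly the same answer:

```python
# Back-of-the-envelope version of the first graph. The threshold (5%) and
# the power of an under-sized study (40%) are assumptions, not numbers
# taken from the post.
prior = 0.10    # proportion of studied effects that are real
alpha = 0.05    # chance a null effect still passes the evidence threshold
power = 0.40    # chance an under-powered study detects a real effect

true_positives = prior * power           # real effects that pass
false_positives = (1 - prior) * alpha    # null effects that pass anyway

ppv = true_positives / (true_positives + false_positives)
print(f"Share of 'significant' results that are real: {ppv:.0%}")
# -> a bit under half, with these assumed inputs
```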

[Figure evidence02: simulated test results, 50% of effects real, under-powered studies]

In the second graph, we’re looking at a setting where half the effects are real and half aren’t. Now, of the effects that pass the threshold, most are real.  On the other hand, there are a lot of real effects that get missed.  This was the setting for a lot of clinical trials in the old days, when they were done in single hospitals or small groups.

[Figure evidence03: simulated test results, 10% of effects real, well-designed studies]

The third case is relatively implausible hypotheses — 10% true — but well-designed studies.  There are still the same number of false positives, but many more true positives.  A better-designed study means that positive results are more likely to be correct.

[Figure evidence04: simulated test results, 50% of effects real, well-designed studies]

Finally, the setting of well-conducted clinical trials intended to be definitive, the sort of studies done to get new drugs approved. About half the candidate treatments work as intended, and when they do, the results are likely to be positive.   For a well-designed test such as this, statistical significance is a reasonable guide to whether the effect is real.
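If you want to poke at these settings yourself, here is a rough simulation sketch along the same lines. The effect size, noise level, per-group sample sizes, and study counts are my own assumptions, chosen only to mimic “under-sized” versus “well-designed” studies; they are not the values behind the graphs above.

```python
# Rough simulation of the four settings: prior plausibility (10% vs 50% of
# effects real) crossed with study size (small vs large). All specific
# numbers below are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def share_real_among_significant(prior, n_per_group,
                                 n_studies=20_000, effect=0.5):
    """Fraction of significant, right-direction results that come from real effects."""
    is_real = rng.random(n_studies) < prior
    mu = np.where(is_real, effect, 0.0)
    control = rng.normal(0.0, 1.0, size=(n_studies, n_per_group))
    treated = rng.normal(mu[:, None], 1.0, size=(n_studies, n_per_group))
    t, p = stats.ttest_ind(treated, control, axis=1)
    passed = (p < 0.05) & (t > 0)   # significant, in the hoped-for direction
    return is_real[passed].mean()

for prior in (0.10, 0.50):            # implausible vs plausible hypotheses
    for n in (20, 100):               # under-sized vs well-designed studies
        ppv = share_real_among_significant(prior, n)
        print(f"prior {prior:.0%}, n = {n} per group: "
              f"{ppv:.0%} of significant results are real")
```

The exact percentages depend on those assumptions, but the pattern is the one in the graphs: a positive result from a small study of an implausible hypothesis is much less trustworthy than a positive result from a large study of a plausible one.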

The problem is that the media only show a subset of the (exciting) solid circles, and typically don’t show the (boring) empty circles. So, what you see is

[Figure evidence05: only the ‘significant’ results, for all four combinations]

where the columns are 10% and 50% proportions of studies with a true effect, and the top and bottom rows are under-sized and well-designed studies.


Knowing the threshold for evidence isn’t enough: the prior plausibility matters, and the ability of the study to demonstrate effects matters. Apparent effects seen in small or poorly-designed studies are less likely to be true.

August 19, 2016

Has your life improved since 1966?

From Pew Research, is life better than 50 years ago for people like you?

[Figure: Pew Research chart, “Is life better than 50 years ago for people like you?”]

The answers aren’t going to tell us much about reality; they say more about the sort of people we are, or want to think we are.  As Fred Clark puts it:

If you ask those of us who are 18-53 years old for our opinions about what life was like before we either existed or have any memory, we’ll give you an answer. And that speculative, possibly even informed, opinion may mean something or other in the aggregate. Maybe it tells us something fuzzy about general optimism or pessimism. Or maybe something about the dismal state of history, social studies, civics and science education.

Or, for the people who do have memories of the mid-sixties…

Age 65-70: I peaked in high school. Go away, nerd, or I’ll give you a swirlie.

August 18, 2016

Post-truth data maps

The Herald has a story “New map compares breast sizes around the world”. They blame news.com.au as the immediate cause, but a very similar story at the Daily Mail actually links to where it got the map.  You might wonder how the data were collected (you might wonder why, too). The journalist did get as far as that:

The breast map doesn’t reveal how the cup sizes were measured, it’s fair to say tracking bra purchases per country would be an ideal – and maybe a little weird – approach.

Rigorously deidentified pie

[Figure: pie chart from the 2016 A-League Injury Report]

Via Dale Warburton on Twitter, this graph comes from page 7 of the 2016 A-League Injury Report (PDF) produced by Professional Footballers Australia — the players’ association for the round-ball game.  It seems to be a sensible and worthwhile document, except for this pie chart. They’ve replaced the club names with letters, presumably for confidentiality reasons. Which is fine. But the numbers written on the graph bear no obvious relationship to the sizes of the pie wedges.

It’s been a bad week for this sort of thing: a TV bar chart that went viral this week had the same sort of problem.

August 17, 2016

Official statistics

There has been some controversy about changes to how unemployment is computed in the Household Labour Force Survey. As StatsNZ had explained, the changes would be back-dated to March 2007, to allow for comparisons.  However, from Stuff earlier this week:

In a media release Robertson, Labour’s finance spokesman, said National was “actively massaging official unemployment statistics” by changing the measure for joblessness to exclude those using websites, such as Seek or TradeMe.

Robertson was referring to the Household Labour Force Survey, due to be released on Wednesday, which he says would “almost certainly show a decrease in unemployment” as a result of the Government “manipulating official data to suit its own needs”.

Mr Robertson has since withdrawn this claim, and is now saying

“I accept the Chief Statistician’s assurances on the reason for the change in criteria but New Zealanders need to be aware that National Ministers have a track record of misusing and misrepresenting statistics.”

That’s a reasonable position — and some of the examples have appeared on StatsChat — but I don’t think the stories in the media have made it clear how serious the original accusation was (even if perhaps unintentionally).

Official statistics such as the unemployment estimates are politically sensitive, and it’s obvious why governments would want to change them. Argentina, famously, did this to their inflation estimates. As a result, no-one believed Argentinian economic data, which gets expensive when you’re trying to borrow money. For that reason, sensible countries structure their official statistics agencies to minimise political influence, and maximise independence.  New Zealand does have a first-world official statistics system — unlike many countries with similar economic resources — and it’s a valuable asset that can’t be taken for granted.

The system is set up so the Government shouldn’t have the ability to “actively massage” official unemployment statistics for minor political gain. If they did, well, ok, it was hyperbole when I said on Twitter ‘we’d need to go through StatsNZ with fire and the sword’, but the Government Statistician wouldn’t be the only one who’d need replacing.

August 15, 2016

Graph of the week

From a real estate agent who will remain nameless

[Figure: the real estate agent’s graph]

Another example of the rule ‘if you have to write out all the numbers, the graph isn’t doing its work.’

August 11, 2016

Selective risk awareness

Disease risk awareness is one of the ways the media can help public health, and infections caused by waterborne organisms are one of the world’s leading public health problems. That’s not why this story is at the top of Stuff’s front page:

[Figure: screenshot of the story on Stuff’s front page]

If you click through, you find that she was in the US, that primary amoebic meningoencephalitis is extremely rare — with about three cases a year in the US — and that the last NZ case was during the Muldoon administration.

The children around the world who die every minute from more common water-borne infections mostly aren’t the right sort of children to make good clickbait in New Zealand.