Posts written by Thomas Lumley (1213)


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient

July 24, 2014

Weak evidence but a good story

An example from Stuff, this time

Sah and her colleagues found that this internal clock also affects our ability to behave ethically at different times of day. To make a long research paper short, when we’re tired we tend to fudge things and cut corners.

Sah measured this by finding out the chronotypes of 140 people via a standard self-assessment questionnaire, and then asking them to complete a task in which they rolled dice to win raffle tickets – higher rolls, more tickets.

Participants were randomly assigned to either early morning or late evening sessions. Crucially, the participants self-reported their dice rolls.

You’d expect the dice rolls to average out to around 3.5. So the extent to which a group’s average exceeds this number is a measure of their collective result-fudging.
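The fudging measure is simple enough to sketch in a few lines of Python. This simulation is illustrative only, not part of the study: honest reporters should score near zero, and anything above that reflects inflated reports.

```python
import random

random.seed(1)

def fudge_measure(rolls):
    """Group-level fudging measure: how far the group's mean reported
    roll exceeds 3.5, the expected value of a fair six-sided die."""
    return sum(rolls) / len(rolls) - 3.5

# Honest rollers: reports are just fair die rolls, so the measure
# should sit close to zero.
honest = [random.randint(1, 6) for _ in range(10_000)]
print(round(fudge_measure(honest), 2))
```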

“Morning people tended to report higher die-roll numbers in the evening than the morning, but evening people tended to report higher numbers in the morning than the evening,” Sah and her co-authors wrote.

The research paper is here.  The Washington Post, where the story was taken from, has a graph of the results, and they match the story. Note that this is one of the very few cases where starting a bar chart at zero is a bad idea. It’s hard to roll zero on a standard die.

[Washington Post graph: average reported die roll by chronotype and time of day]

The research paper also has a graph of the results, which makes the effect look bigger, but in this case is defensible as 3.5 really is “zero” for the purposes of the effect they are studying.

[Graph of the results from the research paper]

Unfortunately, neither graph has any indication of uncertainty. The evidence of an effect is not negligible, but it is fairly weak (p-value of 0.04 from 142 people). It’s easy to imagine someone might do an experiment like this and not publish it if they didn’t see the effect they expected, and it’s pretty certain that you wouldn’t be reading about the results if they didn’t see the effect they expected, so it makes sense to be a bit skeptical.
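To get a feel for how noisy a comparison of two die-roll means is, here is a small simulation, assuming (purely for illustration; the story doesn’t give exact group sizes) 71 honest rollers per session:

```python
import random

random.seed(2)

def session_mean(n):
    """Mean of n fair die rolls."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

# Under the null of no fudging at all, how far apart do two session
# means of 71 rollers each drift just by chance?
diffs = sorted(abs(session_mean(71) - session_mean(71)) for _ in range(2000))
median_gap = diffs[len(diffs) // 2]
big_gap = diffs[int(0.95 * len(diffs))]
print(round(median_gap, 2), round(big_gap, 2))
```

Chance gaps of a couple of tenths of a point between honest groups are routine, which is part of why a p-value of 0.04 is only modest evidence.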

The story goes on to say

These findings have pretty big implications for the workplace. For one, they suggest that the one-size-fits-all 9-to-5 schedule is practically an invitation to ethical lapses.

Even assuming that the effect is real and that lying about a die roll in a psychological experiment translates into unethical behaviour in real life, the findings don’t say much about the ‘9-to-5’ schedule. For a start, none of the testing was conducted between 9am and 5pm.

 

Infographic of the month

Alberto Cairo and wtfviz.net pointed me to the infographic on the left, a summary of a residents’ survey from the town of Flower Mound, Texas (near Dallas/Fort Worth airport). The highlight of the infographic is the 3-D piecharts nesting in the tree, ready to hatch out into full-fledged misinformation.

At least, they look like 3-D pie charts at first glance.  When you look more closely, the data are three-year trends in approval ratings for a variety of topics, so pie charts would be even more inappropriate than usual as a display method.  When you look even more closely, you see that that’s ok, because the 3-D ellipses are all just divided into three equal wedges — the data aren’t involved at all.

[Flower Mound 2014 Citizen Survey infographics, side by side]

The infographic on the right comes from the town government.  It’s much better, especially by the standards of infographics.

If you follow the link, you can read the full survey results, and see that the web page giving survey highlights actually describes how the survey was done — and it was done well.  They sent questionnaires to a random sample of households, got a 35% response rate (not bad, for this sort of thing) and reweighted it based on age, gender, and housing tenure (ie rent, own, etc) to make it more representative.  That’s a better description (and a better survey) than a lot of the ones reported in the NZ media.
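Reweighting of this kind is conceptually simple: each respondent group gets scaled by its population share over its sample share. A toy sketch, with made-up categories and numbers rather than Flower Mound’s actual figures:

```python
# Hypothetical population mix (e.g. from the census) and the mix that
# actually responded to the survey.
population_share = {"rent": 0.30, "own": 0.70}
sample_share = {"rent": 0.15, "own": 0.85}

# Post-stratification weight: under-represented renters count for more,
# over-represented owners for slightly less.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)
```

Applying these weights to the responses makes the weighted sample match the population mix on the chosen characteristics, which is what “more representative” means here.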

 

[update: probably original, higher resolution version, via Dave Bremer.]

July 23, 2014

Human statisticians not obsolete

There’s a website, OnlyBoth.com, that, as it says

Discovers New Insights from Data.
Writes Them Up in Perfect English.
All Automated.

You can test this by asking it for ‘insights’ in some example areas. One area is baseball, so naturally I selected the Seattle Mariners, and 2009, when I still lived in Seattle. OnlyBoth returns several names where it found insights, and I chose ‘Matt Tuiasosopo’ — the most obvious thing about him is that he comes from a famous local football family, but I was interested in what new insight the data revealed.

Matt Tuiasosopo in 2009 was the 2nd-youngest (23 yrs) of the 25 hitters who were born in Washington and played for the Seattle Mariners.

outdone by Matt Tuiasosopo in 2008 (22 yrs).

I don’t think our students need to be too worried yet.

Average and variation

Two graphs from the NZ influenza surveillance weekly update (PDF, via Mark Hanna)

[Graphs: this year’s flu rates against a multi-year average, and against individual past years]

Both show that the seasonal epidemic has started.  I think the second graph is more helpful in comparing this year to the past; showing the actual history for a range of years, rather than an average.  This sort of graph could handle a larger number of past years if they were all or mostly in, eg, thin grey lines, perhaps with this year, last year, and the worst recent year in colour.

The other news in the surveillance update is that the flu viruses that have been examined have overwhelmingly been H1N1 or H3N2, and both these groups are covered in this year’s vaccine.

The self-surveillance world

See anyone you know? (click to embiggen)

[Screenshot: map of geolocated cat photographs]

This is a screenshot from I know where your cat lives, a project at Florida State University that is intended to illustrate the amount of detailed information available from location-tagged online photographs, without being too creepy — just creepy enough.

(via Robert Kosara and Keith Ng)

July 22, 2014

Lack of correlation does not imply causation

From the Herald

Labour’s support among men has fallen to just 23.9 per cent in the latest Herald-DigiPoll survey and leader David Cunliffe concedes it may have something to do with his “sorry for being a man” speech to a domestic violence symposium.

Presumably Mr Cunliffe did indeed concede it might have something to do with his statement; and there’s no way to actually rule that out as a contributing factor. However

Broken down into gender support, women’s support for Labour fell from 33.4 per cent last month to 29.1 per cent; and men’s support fell from 27.6 per cent last month to 23.9 per cent.

That is, women’s support for Labour fell by 4.3 percentage points (give or take about 4.2) and men’s by 3.7 percentage points (give or take about 4.2). This can’t really be considered evidence for a gender-specific Labour backlash. Correlations need not be causal, but here there isn’t even a correlation.
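The ‘give or take’ figures are poll margins of error, which for a subgroup work out roughly as below (assuming, for illustration, about 375 women in a 750-person poll; the story doesn’t give subgroup sizes):

```python
import math

def moe(p, n, z=1.96):
    """Approximate 95% margin of error, in percentage points,
    for an estimated proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

single_poll = moe(0.30, 375)                  # one month's subgroup estimate
month_to_month = math.sqrt(2) * single_poll   # change between two such polls
print(round(single_poll, 1), round(month_to_month, 1))
```

The change between two independent polls is noisier than either poll on its own, so a drop of around four points in a subgroup is well within what sampling error alone can produce.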

July 14, 2014

Supermoon

Why supermoons aren’t a big deal for earthquakes, based on XKCD

[Graphic: supermoons and earthquakes, based on XKCD]

Multiple testing, evidence, and football

There’s a Twitter account, @FifNdhs, that has five tweets, posted well before today’s game

  • Prove FIFA is corrupt
  • Tomorrow’s scoreline will be Germany win 1-0
  • Germany will win at ET
  • Gotze will score
  • There will be a goal in the second half of ET

What’s the chance of getting these four predictions right, if the game isn’t rigged?

Pretty good, actually. None of these events is improbable on its own, and  Twitter lets you delete tweets and delete accounts. If you set up several accounts, posted a few dozen tweets on each, describing plausible events, and then deleted the unsuccessful ones, you could easily come up with an implausible-sounding remainder.
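The arithmetic of this survivorship trick is straightforward. Suppose (a made-up figure, just for illustration) each throwaway account’s bundle of plausible predictions has a 5% chance of all coming true:

```python
def chance_some_account_survives(p_single, k_accounts):
    """Probability that at least one of k independent throwaway
    accounts gets its whole bundle of predictions right."""
    return 1 - (1 - p_single) ** k_accounts

for k in (1, 20, 100):
    print(k, round(chance_some_account_survives(0.05, k), 2))
```

With a few dozen accounts, an apparently miraculous survivor is close to guaranteed; you only ever see the one that didn’t get deleted.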

Twitter can prove you made a prediction, but it can’t prove you didn’t also make a different one, so it’s only good evidence of a prediction if either the predictions were widely retweeted before they happened, or the event described in a single tweet is massively improbable.

If @FifNdhs had predicted a 7-1 victory for Germany over Brazil in the semifinal, that would have been worth paying attention to. Gotze scoring, not so much.

July 13, 2014

Age/period/cohort voting

From the New York Times, an interactive graph showing how political leanings at different ages have changed over time

[Interactive graph: political leanings by birth year over time]

Yes, voting preferences for kids are problematic. Read the story (and this link) to find out how they inferred them. There’s more at Andrew Gelman’s blog.

100% accurate medical testing

The Wireless has a story about a fatal disease where there’s an essentially 100% accurate test available.

Alice Harbourne has a 50% chance of Huntington’s Disease. If she gets tested, she will have either a 0% or 100% chance, and despite some recent progress on the mechanism of the disease, there is no treatment.