Posts filed under Research (190)

June 23, 2016

Or the other way around

It’s a useful habit, when you see a causal claim based on observational data, to turn the direction around: the story says A causes B, but could B cause A instead? People get annoyed when you do this, because they think it’s silly. Sometimes, though, that is what is happening.

As a pedestrian and public transport user, I’m in favour of walkable neighbourhoods, so I like seeing research that says they are good for health. Today, Stuff has a story that casts a bit of doubt on those analyses.

The researchers used Utah driver’s-licence data, which include self-reported height and weight, to divide all the neighbourhoods in Salt Lake County into four groups by average body mass index. They used Utah birth certificates, which report mother’s height and weight, and looked at 40,000 women who had at least two children while living in Salt Lake County during the 20-year study period.  Then they looked at women who moved from one neighbourhood to another between the two births. Women with higher BMI were more likely to move to a higher-BMI neighbourhood.

If this is true in other cities and for people other than mothers with new babies, it’s going to exaggerate the health benefits of walkable neighbourhoods: there will be a feedback loop where these neighbourhoods provide more exercise opportunity, leading to lower BMI, leading to other people with lower BMI moving there.   It’s like with schools: suppose a school starts getting consistently good results because of good teaching. Wealthy families who value education will send their kids there, and the school will get even better results, but only partly because of good teaching.
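The feedback loop is easy to illustrate with a small simulation (all numbers invented): give walkable neighbourhoods a genuine half-point BMI benefit, let lower-BMI people preferentially move there, and see how much bigger the apparent benefit looks to an analysis that ignores the selection.

```python
import random

random.seed(1)

# Hypothetical numbers: walkable neighbourhoods genuinely lower BMI by 0.5,
# but with selection, lower-BMI people are also more likely to live there.
def simulate(selection):
    walkable_bmi, other_bmi = [], []
    for _ in range(100_000):
        bmi = random.gauss(27, 4)               # baseline BMI
        p = 0.5 - selection * (bmi - 27) / 40   # chance of a walkable suburb
        if random.random() < p:
            walkable_bmi.append(bmi - 0.5)      # genuine exercise benefit
        else:
            other_bmi.append(bmi)
    # Apparent benefit: mean BMI difference between neighbourhood types
    return sum(other_bmi) / len(other_bmi) - sum(walkable_bmi) / len(walkable_bmi)

print(f"no selection:   {simulate(0.0):.2f}")
print(f"with selection: {simulate(1.0):.2f}")
```

With no selection the apparent benefit matches the true half-point effect; with selection it comes out several times larger, even though nothing about the neighbourhoods changed.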

June 22, 2016

Making hospital data accessible

From the Guardian

The NHS is increasingly publishing statistics about the surgery it undertakes, following on from a movement kickstarted by the Bristol Inquiry in the late 1990s into deaths of children after heart surgery. Ever more health data is being collected, and more transparent and open sharing of hospital summary data and outcomes has the power to transform the quality of NHS services further, even beyond the great improvements that have already been made.

The problem is that most people don’t have the expertise to analyse the hospital outcome data, and that there are some easy mistakes to make (just as with school outcome data).

A group of statisticians and psychologists developed a website that tries to help, for the data on childhood heart surgery.  Comparisons between hospitals in survival rate are very tempting (and newsworthy) here, but misleading: there are many reasons children might need heart surgery, and the risk is not the same for all of them.

There are two, equally important, components to the new site. Underneath, invisible to the user, is a statistical model that predicts the surgery result for an average hospital, and the uncertainty around the prediction. On top is the display and explanation, helping the user to understand what the data are saying: is the survival rate at this hospital higher (or lower) than would be expected based on how difficult their operations are?
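The underlying idea can be sketched in a few lines (a toy model with invented risks, not the site’s actual methodology): sum each operation’s estimated death risk from a case-mix model to get the number of survivors an average hospital would be expected to have, with the uncertainty coming from treating deaths as independent events.

```python
import math

def expected_range(risks, z=1.96):
    """Predicted survivors for an average hospital, and an approximate 95% range."""
    expected = sum(1 - r for r in risks)
    sd = math.sqrt(sum(r * (1 - r) for r in risks))  # Bernoulli variance, summed
    return expected, (expected - z * sd, expected + z * sd)

# Hypothetical case mix: mostly low-risk operations, a few high-risk ones
risks = [0.01] * 180 + [0.05] * 15 + [0.25] * 5

expected, (lo, hi) = expected_range(risks)
observed = 195
print(f"expected survivors: {expected:.1f}, plausible range {lo:.1f} to {hi:.1f}")
print("within expected range" if lo <= observed <= hi else "outside expected range")
```

The point of the display layer is then to report only the last line, in words: whether this hospital’s survival is higher or lower than expected given how difficult its operations are.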

May 20, 2016

Depends who you ask

There’s a Herald story about sleep

A University of Michigan study using data from Entrain, a smartphone app aimed at reducing jetlag, found Kiwis on average go to sleep at 10.48pm and wake at 6.54am – an average of 8 hours and 6 minutes sleep.

It quotes me as saying the results might not be all that representative, but it just occurred to me that there are some comparison data sets for the US at least.

  • The Entrain study finds people in the US go to sleep on average just before 11pm and wake up on average between 6:45 and 7am.
  • SleepCycle, another app, reports a bedtime of 11:40 for women and midnight for men, with both men and women waking at about 7:20.
  • The American Time Use Survey is nationally representative, but not that easy to get stuff out of. However, Nathan Yau at Flowing Data has an animation saying that 50% of the population are asleep at 10:30pm and awake at 6:30am.
  • And Jawbone, who don’t have to take anyone’s word for whether they’re asleep, have a fascinating map of mean bedtime by county of the US. It looks like the national average is after 11pm, but there’s huge variation, both urban-rural and position within your time zone.

These differences partly come from who is deliberately included and excluded (kids, shift workers, the very old), partly from measurement details, and partly from oversampling of the sort of people who use shiny gadgets.

May 6, 2016

Reach out and touch someone

Q: Did you see in the Herald that texting doesn’t help relationships?

A: That’s what they said, yes.

Q: And is it what they found?

A: Hard to tell. There aren’t any real descriptions of the results.

Q: What did they do?

A: Well, a couple of years ago, the researcher had a theory that “sending just one affectionate text message a day to your partner could significantly improve your relationship.”

Q: So the research changed her mind?

A: Sounds like.

Q: That’s pretty impressive, isn’t it?

A: Yes, though it doesn’t necessarily mean it should change ours.

Q: It sounds like a good study, though. Enrol some people and regularly remind half of them to send affectionate text messages.

A: Not what they did.

Q: They enrolled mice?

A: I don’t think there are good animal models for assessing affectionate text messages. Selfies, maybe.

Q: Ok, so that publicity item about the research is headlined “Could a text a day keep divorce away?”

A: Yes.

Q: Did they ask people about their text-messaging behaviour and then wait to see who got divorced?

A: It doesn’t look like it.

Q: What did they do?

A: It’s not really clear: there are no details in the Herald story or in the Daily Mail story they took it from.  But they were recruiting people for an online survey back in 2014.

Q: A bogus poll?

A: Well, if you want to put it that way, yes. It’s not as bogus when you’re trying to find out if two things are related rather than how common one thing is.

Q: <dubiously> Ok. And then what?

A: It sounds like they interviewed some of the people, and maybe asked them about the quality of their relationships. And that people who didn’t see their partners or who didn’t get affection in person weren’t as happy even if they got a lot of texts.

Q: Isn’t that what you’d expect anyway? I mean, even if the texts made a huge difference, you’d still wish that you had more time together or that s/he didn’t stop being affectionate when they got off the phone.

A: Pretty much. The research might have considered that, but we can’t tell from the news story. There doesn’t even seem to be an updated press release, let alone any sort of publication.

Q: So people shouldn’t read this story and suddenly stop any social media contact with their sweetheart?

A: No. That was last week’s story.


April 18, 2016

Being precise


There are stories in the Herald about home buyers being forced out of Auckland by house prices, and about the proportion of homes in other regions being sold to Aucklanders.  As we all know, Auckland house prices are a serious problem and might be hard to fix even if there weren’t motivations for so many people to oppose any solution.  I still think it’s useful to be cautious about the relevance of the numbers.

We don’t learn from the story how CoreLogic works out which home buyers in other regions are JAFAs — we should, but we don’t. My understanding is that they match names in the LINZ title registry.  That means the 19.5% of Tauranga buyers identified as Aucklanders last quarter is made up of three groups:

  1. Auckland home owners moving to Tauranga
  2. Auckland home owners buying investment property in Tauranga
  3. Homeowners in Tauranga who have the same name as a homeowner in Auckland.

Only the first group is really relevant to the affordability story.  In fact, it’s worse than that. Some of the first group will be moving to Tauranga just because it’s a nice place to live (or so I’m told).  Conversely, as the story says, a lot of the people who are relevant to the affordability problem won’t be included precisely because they couldn’t afford a home in Auckland.

For data from recent years the problem could have been reduced a lot by some calibration to ground truth: contact people living at a random sample of the properties and find out if they had moved from Auckland and why.  You might even be able to find out from renters if their landlord was from Auckland, though that would be less reliable if a property management company had been involved.  You could do the same thing with a sample of homes owned by people without Auckland-sounding names to get information in the other direction.  With calibration, the complete name-linkage data could be very powerful, but on its own it will be pretty approximate.
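As a sketch of what the calibration arithmetic might look like (all rates invented for illustration): if the ground-truth sample showed the name match flagging, say, 95% of genuine Auckland buyers but also 6% of everyone else, a standard misclassification correction recovers the underlying proportion from the observed 19.5% match rate.

```python
# Hypothetical sensitivity (tpr) and false-positive rate (fpr) from a
# calibration sample; observed rate = tpr * p + fpr * (1 - p), solved for p.
def calibrated_rate(observed, tpr, fpr):
    return (observed - fpr) / (tpr - fpr)

p = calibrated_rate(0.195, tpr=0.95, fpr=0.06)
print(f"calibrated Auckland-buyer share: {p:.1%}")
```

Even a modest false-positive rate moves the headline figure noticeably, which is exactly why the raw name-linkage number on its own is only approximate.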


April 17, 2016

Evil within?

The headline: Sex and violence ‘normal’ for boys who kill women in video games: study. That’s a pretty strong statement, and the scare quotes imply we’re going to find out who made the claim. We don’t.

The (much-weaker) take-home message:

The researchers’ conclusion: Sexist games may shrink boys’ empathy for female victims.

The detail:

The researchers then showed each student a photo of a bruised girl who, they said, had been beaten by a boy. They asked: On a scale of one to seven, how much sympathy do you have for her?

The male students who had just played Grand Theft Auto – and also related to the protagonist – felt least bad for her, with a mean empathy score of 3. Those who had played the other games, however, exhibited more compassion. And female students who played the same rounds of Grand Theft Auto had a mean empathy score of 5.3.

The important part is between the dashes: male students who related more to the protagonist in Grand Theft Auto had less empathy for a female victim.  There’s no evidence given that this was a result of playing Grand Theft Auto, since the researchers (obviously) didn’t ask about how people who didn’t play that game related to its protagonist.

What I wanted to know was how the empathy scores compared by which game the students played, separately by gender. The research paper didn’t report the analysis I wanted, but thanks to the wonders of Open Science, their data are available.

If you just compare which game the students were assigned to (and their gender), here are the means; the intervals are set up so there’s a statistically significant difference between two groups when their intervals don’t overlap.


The difference between different games is too small to pick out reliably at this sample size, but is less than half a point on the scale — and while the ‘violent/sexist’ games might reduce empathy, there’s just as much evidence (ie, not very much) that the ‘violent’ ones increase it.
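For readers curious about the interval construction: one standard recipe (a sketch, not necessarily the exact method used for the plot, and assuming the groups have similar standard errors) is to plot mean ± 1.96/√2 standard errors, so that two such intervals fail to overlap roughly when the difference between the groups is significant at the 5% level; ordinary 95% confidence intervals are wider and overlap too easily. With invented scores:

```python
import math
import statistics

def comparison_interval(values):
    """Mean +/- 1.96/sqrt(2) standard errors: non-overlap of two such
    intervals corresponds roughly to a significant difference at 5%."""
    m = statistics.mean(values)
    se = statistics.stdev(values) / math.sqrt(len(values))
    half = 1.96 / math.sqrt(2) * se
    return m - half, m + half

# Invented empathy scores for two groups (not the study's data)
group_a = [3, 2, 4, 3, 3, 5, 2, 3, 4, 3]
group_b = [5, 6, 5, 4, 6, 5, 5, 6, 4, 5]

print(comparison_interval(group_a))
print(comparison_interval(group_b))
```

Here the two intervals don’t overlap, so the (invented) difference would count as significant; with the real data and four games per gender, the intervals mostly do overlap.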

Here’s the complete data, because means can be misleading:


The data are consistent with a small overall impact of the game, or no real impact. They’re consistent with a moderately large impact on a subset of susceptible men, but equally consistent with some men just being horrible people.

If this is an issue you’ve considered in the past, this study shouldn’t be enough to alter your views much, and if it isn’t an issue you’ve considered in the past, it wouldn’t be the place to start.

March 24, 2016

The fleg

There are two StatsChat-relevant points to be made.

First, the opinion polls underestimated the ‘change’ vote — not disastrously, but enough that the pollsters likely won’t be putting this referendum at the top of their portfolios.  In the four polls for the second phase of the referendum after the first phase was over, the lowest support for the current flag (out of those expressing an opinion) was 62%. The result was 56.6%.  The data are consistent with support for the fern increasing over time, but I wouldn’t call the evidence compelling.

Second, the relationship with party vote. The Herald, as is their wont, have a nice interactive thingy up on the Insights blog giving results by electorate, but they don’t do party vote (yet — it’s only been an hour).  Here are scatterplots for the referendum vote and main(ish) party votes (the open circles are the Māori electorates, and I have ignored the Northland byelection). The data are from here and here.


The strongest relationship is with National vote, whether because John Key’s endorsement swayed National voters or whether it did whatever the opposite of swayed is for anti-National voters.

Interestingly, given Winston Peters’s expressed views, electorates with higher NZ First vote and the same National vote were more likely to go for the fern.  This graph shows the fern vote vs NZ First vote for electorates divided into six groups based on their National vote. Those with low National vote are on the left; those with high National vote are on the right. (click to embiggen).

There’s an increasing trend across panels because electorates with higher National vote were more fern-friendly. There’s also an increasing trend within each panel, because electorates with similar National vote but higher NZ First vote were more fern-friendly.  For people who care, yes, this is backed up by the regression models.
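A toy version of that regression, with invented electorate-level numbers rather than the real referendum data, shows the kind of model meant: fern vote on National and NZ First party-vote shares, fitted by ordinary least squares via the normal equations.

```python
import random

random.seed(2)

# Invented electorates in which both National and NZ First vote share
# genuinely raise fern support (coefficients 0.7 and 0.5), plus noise.
data = []
for _ in range(60):
    national = random.uniform(0.2, 0.6)
    nzfirst = random.uniform(0.02, 0.15)
    fern = 0.1 + 0.7 * national + 0.5 * nzfirst + random.gauss(0, 0.02)
    data.append((national, nzfirst, fern))

def ols(rows):
    """Least squares for fern ~ national + nzfirst via normal equations."""
    X = [[1.0, n, z] for n, z, _ in rows]
    y = [f for _, _, f in rows]
    k = 3
    A = [[sum(x[r] * x[c] for x in X) for c in range(k)] for r in range(k)]
    b = [sum(x[r] * yi for x, yi in zip(X, y)) for r in range(k)]
    # Gaussian elimination with partial pivoting, then back-substitution
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

intercept, b_national, b_nzfirst = ols(data)
print(f"National coefficient: {b_national:.2f}, NZ First coefficient: {b_nzfirst:.2f}")
```

Both coefficients come back positive, which is the pattern the panels show: a trend across panels (National) and a trend within each panel (NZ First).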


Two cheers for evidence-based policy

Daniel Davies has a post at the Long and Short and a follow-up post at Crooked Timber about the implications for evidence-based policy of non-replicability in science.

Two quotes:

 So the real ‘reproducibility crisis’ for evidence-based policy making would be: if you’re serious about basing policy on evidence, how much are you prepared to spend on research, and how long are you prepared to wait for the answers?


“We’ve got to do something“. Well, do we? And equally importantly, do we have to do something right now, rather than waiting quite a long time to get some reproducible evidence? I’ve written at length, several times, in the past, about the regrettable tendency of policymakers and their advisors to underestimate a number of costs; the physical deadweight cost of reorganisation, the stress placed on any organisation by radical change, and the option value of waiting. 

March 9, 2016

Not the most literate?

The Herald (and/or the Otago Daily Times) say

 New Zealand is the fifth most literate country in the world.


New Zealand ranked higher than Germany (9), Canada (10), the US (11), UK (14) and Australia (15).

Newshub had a similar story and the NZEI welcomed the finding.  One of the nice things about the Herald story is it provides a link. If you follow that link, the ratings look a bit different.


There are five other rankings in addition to the “Final Rank”, but none of them has NZ at number five.


So, where did the numbers come from? It can’t be a mistake at the Herald, because Newshub had the same numbers (as did Finland Today and basically everyone except the Washington Post).

Although nobody links, I did track down the press release. It has the ranks given by the Herald, and it has the quotes they used from the creator of the ranking.  The stories would have been written before the site went live, so the reporters wouldn’t have been able to check the site even if it had occurred to them to do so.  I have no idea how the press release managed to disagree with the site itself, and while it would be nice to see corrections published, I won’t hold my breath.


Underlying this relatively minor example is a problem with the intersection of ‘instant news’ and science that I’ve mentioned before.  Science stories are often written before the research is published, and often released before it is published. This is unnecessary except for the biggest events: the science would be just as true (or not) and just as interesting (or not) a day later.

At least the final rank still shows NZ beating Australia.

February 28, 2016

How I met your mother

Via Jolisa Gracewood on Twitter, a graph from Stanford sociologist Michael Rosenfeld on how people met their partners (click to embiggen)


Obviously the proportion who met online has increased — in the old days there weren’t many people online. It’s still dramatic how fast the change happened, considering that ‘the September that never ended’, when AOL subscribers gained access to Usenet, was only 1993.  It’s also notable how everything else except ‘in a bar or restaurant’ has gone down.

Since this is StatsChat you should be asking how they got the data: it was a reasonably good survey. There’s a research paper, too (PDF).

You should also be worrying about the bump in ‘online’ in the mid-1980s. It’s ok. The paper says “This bump corresponds to two respondents. These two respondents first met their partners in the 1980s without the assistance of the Internet, and then used the Internet to reconnect later.”