Posts from January 2012 (29)

January 30, 2012

Global temperatures for 2011

NASA’s annual summary of global temperatures is out.  2011 was not the warmest year on record; it was only ninth, a whole eighth of a degree cooler than the year before.  One of the years that beat 2011 wasn’t even in the twenty-first century. [It was 1998.]

January 29, 2012

Bogus polls compared

The Crafar farms decision has inspired multiple bogus polls asking whether it was the right decision, which gives us an opportunity for comparisons.

Current or recent polls include:

  • NZ Herald: 34% in favour of the decision
  • Stuff: 22% in favour
  • Campbell Live: 3% in favour

(I would link, but these polls tend to disappear quickly from web pages. The pre-decision polls already seem to be gone.)

The Stuff and NZ Herald polls both claim about 18,000 votes. If this were a real poll, the maximum margin of error would be under 1%. Clearly the actual error is at least 19%, and quite possibly more.
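
For anyone who wants to check that arithmetic, here is a minimal sketch in Python of the usual worst-case margin-of-error formula, 1.96 × √(0.25/n), applied to a sample of 18,000 (the function name is just for illustration):

    import math

    def max_margin_of_error(n, z=1.96):
        """Worst-case 95% margin of error for a simple random sample of size n.
        The worst case is a true proportion of 0.5."""
        return z * math.sqrt(0.25 / n)

    # The Stuff and NZ Herald clicky polls each claim roughly 18,000 votes.
    print(round(100 * max_margin_of_error(18000), 2))  # about 0.73 percentage points

A real sample of that size would pin the answer down to within a point or so; disagreements of 12 and 31 percentage points between the clicky polls come from who chooses to vote, not from sampling error.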

If the true proportion was about 20% (as in the real poll taken late last year) and you had a real poll with a sample size of just ten, you would have a 3 in 4 chance of getting within 10 percentage points of the true answer.  The chance of a 30-percentage-point spread over three polls of size ten would be only 1 in 5.  So, on this issue, the self-selected polls are worse than a random sample of just ten people.  You can see why we like the term ‘bogus’.
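
If you want to check the size-ten comparison yourself, here is a rough simulation sketch. The exact probabilities depend on where you put the cutoffs (at the boundary or strictly beyond it), so the round figures above, and whatever this prints, should be read as ballpark values.

    import random

    random.seed(1)
    TRUE_P = 0.20      # roughly the level of support found in the real poll
    N = 10             # size of each hypothetical mini-poll
    SIMS = 100000

    def mini_poll():
        """Proportion in favour from a simple random sample of N people."""
        return sum(random.random() < TRUE_P for _ in range(N)) / N

    close_calls = 0    # a single mini-poll lands within 10 points of the truth
    wide_spreads = 0   # three mini-polls spread as far apart as the bogus polls did

    for _ in range(SIMS):
        polls = [mini_poll() for _ in range(3)]
        close_calls += abs(polls[0] - TRUE_P) <= 0.10
        wide_spreads += (max(polls) - min(polls)) >= 0.31   # observed spread: 34% vs 3%

    print("P(one poll of 10 within 10 points):", close_calls / SIMS)
    print("P(three polls of 10 spreading 31+ points):", wide_spreads / SIMS)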

Bogus polls are worse than useless because of anchoring bias. Seeing the results is likely to make your beliefs less accurate, even if you know the information content is effectively zero.

 

January 26, 2012

Unfaithful to the data, too.

When I were young, the Serious News Outlets  probably wouldn’t have admitted the existence of extra-marital affairs by non-celebrities, let alone written an article that’s basically advertising from an infidelity website press release.

In some ways the data are better-quality than most advertorials, because the website has complete data on its NZ members.  They have even gone as far as using population sizes for NZ cities to estimate their, um, market penetration, which varied across the five main cities by as much as 0.06%.  No, that doesn’t exceed the margin of error.

The Herald’s article starts off

If your partner supports National, has a PC, drinks Coke, eats meat, has a tattoo, smokes and is a Christian, be warned – they could be a cheater.

Leaving aside the gaping logical chasm in treating website members as representative of all ‘cheaters’, what the data actually say is that more members support National, not that more National supporters are members.  As you may recall, we determined not so long ago that more New Zealanders of all descriptions support National than any other party, so that’s what you would expect for members of the website.  The proportion of National supporters in the election was 47%; among website members it’s 33%, so National supporters are substantially less likely to be members of the website than supporters of other parties.

The proportion identifying as Christian among website members is very similar to the proportion in the 2006 census.  79% of website users are on a PC (vs a Mac).  Again, that’s a lower proportion of PCs than in the population of NZ computers (the Herald said 10% were Macs in July 2010, and for Aus+NZ combined, IDC now says 15%), but one explanation is that Macs have more of the home market than the business market.  More members drinking Coke than Pepsi is also not surprising — I couldn’t find population figures, but Coke dominates the NZ cola market.
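
The member-versus-supporter distinction is just conditional probability. Here is a small illustrative calculation using the figures quoted above (47% of voters, 33% of members); the overall membership rate cancels out of the comparison, so we never need to know it.

    # Election result: share of voters supporting National vs everyone else
    p_national = 0.47
    p_other = 1 - p_national

    # The website's own figures: share of members supporting National vs everyone else
    p_national_given_member = 0.33
    p_other_given_member = 1 - p_national_given_member

    # By Bayes' rule, P(member | National) = P(National | member) * P(member) / P(National).
    # P(member) appears in both rates, so it cancels when we take the ratio.
    relative_rate = (p_national_given_member / p_national) / (p_other_given_member / p_other)

    print(f"National supporters are about {relative_rate:.2f} times as likely to be members.")
    # roughly 0.56 -- i.e. noticeably less likely than supporters of other parties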

The story doesn’t say, but we can also be pretty confident that the website members are more likely to be Pakeha than Maori, more likely to be accountants than statisticians, and more likely to have a pet cat than a pet camel.

 

Another smoking survey

Today it’s Hamilton’s turn:

A survey of 111 residents at Hamilton Lake and Innes Common playgrounds, the city bus station and Waikato University in mid-2011 found 94 per cent wanted children’s playgrounds to be smokefree.

This is much more sensible than the website poll on Auckland’s initiative that was magically translated into “a majority of New Zealanders”.

The poll will be a biased sample of the population, because it will over-represent people who go to children’s playgrounds, but it’s perfectly reasonable for them to have more say about smoking there.   I assume (I have to assume, because the facts aren’t given) that the survey also asked if the bus station and the University campus should be smoke-free, and that the results were less favorable.

We also aren’t told who did the survey, and what the questions were.   You might get quite different responses for a survey conducted by the Council and one conducted by the Cancer Society.

Even accounting for this, it looks as though there’s a lot of support, and I’d say the poll qualifies as not completely useless.
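
As a rough sanity check on “a lot of support”: if you pretend, generously, that the 111 people were a simple random sample, the sampling error alone is only a few percentage points, so the real worry is the selection and question-wording issues above, not the sample size. A sketch:

    import math

    p_hat = 0.94   # 94 per cent of those asked wanted smokefree playgrounds
    n = 111

    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"Approximate 95% interval: {p_hat - margin:.0%} to {p_hat + margin:.0%}")
    # roughly 90% to 98%, if (and only if) the sample were random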

January 24, 2012

How to read road-toll statistics

Chris Triggs of the Department of Statistics at The University of Auckland was on Graeme Hill’s Radio Live show in the weekend to explain how we should read road-toll statistics. Remember … it’s not ‘killer roads’, it’s a random-chance phenomenon. Start at 24 mins 45 sec.

Gold leaf on your food, madam?

Gold leaf on food might seem terribly exotic – until you see the numbers. See Thomas Lumley in today’s Sideswipe column in the New Zealand Herald.

[update: and here’s the post that Sideswipe references]

 

January 21, 2012

Eggs for breakfast

Earlier in the week I complained that the Egg Foundation and the Herald were over-interpreting a lab study of mouse brain cells.  The study was a perfectly reasonable, and probably technically difficult, piece of basic biological research.  It’s the sort of research that answers the question “By what mechanisms might different foods affect brain function differently?”.   It doesn’t answer the question “What’s for breakfast?”.

If you wanted to know whether a high-protein breakfast such as eggs really increases alertness, there are at least two ways to set up a relevant study.  The first would be an open-label randomized comparison of eggs and something else; the second would be a double-blind study of high-protein and high-carbohydrate versions of the same breakfast.  In both cases, you recruit people and randomly allocate them to higher-protein breakfasts on some days and lower-protein on other days.
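
As a concrete, and entirely hypothetical, sketch of what “randomly allocate them to higher-protein breakfasts on some days and lower-protein on other days” could look like for one participant (the day count and labels here are made up for illustration):

    import random

    def breakfast_schedule(n_days=14, seed=None):
        """Randomly assign each study day to a higher-protein or a higher-carbohydrate
        breakfast, with equal numbers of each, for a single participant."""
        rng = random.Random(seed)
        days = ["higher-protein"] * (n_days // 2) + ["higher-carbohydrate"] * (n_days // 2)
        rng.shuffle(days)
        return days

    # Each participant would get their own independent shuffle.
    print(breakfast_schedule(n_days=14, seed=42))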

In an open-label study you have to be careful to minimise response bias, so you would tell participants, truthfully, that some people think protein for breakfast is better and others think complex carbohydrates are better.  You would have to be careful not to indicate what you believed, and it would be a good idea to measure some additional information beyond alertness, such as hunger or what people ended up eating for lunch.  There’s always some potential for bias, and one strategy is to ask participants about something that you don’t expect to be affected, like headaches.  This strategy was used in the home heating randomized trial that underlies the government’s ‘warm home’ advertising, which found that asthma was reduced by better heating, but twisted ankles were not.

In a blinded version of the study, you might recruit muesli eaters and, perhaps with the help of a cereal manufacturer, randomize them to higher-protein and lower-protein versions of breakfast.  This would be a bit more expensive, but perfectly feasible.  There would be less risk of reporting bias, since neither the participant nor the people recording the data would know whether the meals were higher or lower in protein on a particular day.  At the end of the study, you unmask the breakfasts and compare alertness.   The main disadvantage of this approach is the same as its main advantage — you learn about higher-protein vs lower-protein muesli, and have to make some assumptions to generalize this to eggs vs cereal or toast.

If it really mattered whether eggs for breakfast increased alertness, these studies would be worth doing.  But the Egg Foundation is unlikely to be interested, since it wouldn’t benefit from knowing the facts.  The mouse brain study is enough of a fig-leaf to let the claim stand up in public, and they don’t want to risk finding out that it doesn’t have any clothes.

 

January 20, 2012

Predicting whether you’ll live to 100.

From the Herald

Scientists are claiming a genetic test can predict whether someone will live to 100 years old.

The study…claims to be able to predict exceptional longevity with 60 to 85 percent accuracy, depending on the subject’s age.

You can read the paper, which is in the open-access journal PLoS One.

Whether the prediction really works comes down in part to what you mean by “60 to 85% accuracy”.  There’s a very easy way to predict whether someone will live to 100 years old, with better than 99% accuracy.  Ask them if they are over 100. If they say “Yes”, predict “Yes”; if they say “No”, predict “No”.  Since almost no-one lives to be 100, you will almost always be right.
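
To see why a bare “accuracy” figure is such a weak claim, here is a toy calculation. The do-nothing rule below is in the same spirit as the ask-their-age trick: it amounts to predicting “No” for essentially everyone. The 1% survival figure is a deliberately generous order of magnitude, not a number from the paper.

    def accuracy_of_always_no(p_reach_100):
        """Accuracy of the rule 'predict that nobody will reach 100'.
        It is wrong only for the people who actually do reach 100."""
        return 1 - p_reach_100

    # Far fewer than 1 in 100 people live to 100, so even a generous 1% guess
    # makes the do-nothing rule at least 99% accurate.
    print(accuracy_of_always_no(0.01))   # 0.99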

The new test is not as useless as this, but it still isn’t terribly accurate.  Distinguishing people who live to 90 from those who live to 100, the test gets the correct prediction for about half of the centenarians and for about two-thirds of the non-centenarians.  You could probably predict that well in 90+ year olds by asking them how their health is, and whether they can get around on their own.  The ability to predict survival to 105 among 100-year-olds is slightly better, but again, probably not as accurate as you could get more easily from health information.

The point of the paper isn’t really prediction. It’s to find genes that are connected with longevity, which are still not well understood, and the reason for talking about prediction is to make the point that genetic variations do matter in extreme old age.  Even from this point of view the results are a bit over-sold, since the biggest component of the genetics is a well-known gene, APO E, where commercial testing has been (controversially) available for years.
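
To connect the figures quoted above (right for about half of centenarians and about two-thirds of non-centenarians) with a single headline “accuracy”: the overall number depends heavily on the mix of centenarians and non-centenarians you evaluate on, which is one reason the percentages don’t tell you much on their own. A sketch, with the mixes chosen purely for illustration:

    def overall_accuracy(sensitivity, specificity, prevalence):
        """Overall proportion of correct predictions, given the accuracy among
        centenarians (sensitivity), among non-centenarians (specificity), and
        the fraction of the evaluation group who actually reach 100."""
        return sensitivity * prevalence + specificity * (1 - prevalence)

    sens, spec = 0.5, 2 / 3   # the figures quoted above

    # Evaluated on a 50:50 mix of centenarians and non-centenarians:
    print(round(overall_accuracy(sens, spec, 0.5), 2))    # 0.58
    # Evaluated on a group where only a few percent reach 100:
    print(round(overall_accuracy(sens, spec, 0.03), 2))   # 0.66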

This study has attracted a lot of media attention around the world. Some stories mentioned this note from the journal editors:

While we recognize that aspects of this study will attract attention owing to the history and the strong claims made in the paper, the handling editor, Greg Gibson, made the decision that publication is warranted, balancing the extensive peer review and the spirit of PLoS ONE to allow important new results and approaches to be available to the scientific community so long as scientific standards have been met.  We trust that publication will facilitate full evaluation of the study.

Others didn’t.

Bogus smoking poll.

From the NZ Herald

Auckland councillors are divided over a proposed smoking ban in public outdoor areas, but the majority of New Zealanders say the idea is either sensible or good in theory.

If you read the article, it turns out that the claim about the majority of New Zealanders is based on the clicky poll on the Herald website.  That is, the data come from what the newspapers ordinarily call “an unscientific poll” and what we at StatsChat prefer to call “a bogus poll”.  Last week I criticised the Drug Foundation online poll results as ‘dodgy numbers’.  This is well beyond ‘dodgy’.

In the Drug Foundation poll, the point was that a non-negligible fraction of people believed drug driving was safe, and the poll provided at least some support for the argument even if the numbers were unreliable.  And the Drug Foundation collected a lot of demographic information so it was possible to say something about the ways in which the sample was biased.

In this example it really matters whether the support is, say, 40% or 70%, and we have no idea of the extent of the bias, except that there are probably responses from people outside Auckland.

If a ban on smoking in public outdoor areas had sufficiently strong majority support (perhaps 2/3 majority), I wouldn’t necessarily be against it, but we need real numbers, based on real opinions of a concrete plan.

January 19, 2012

Stay a lert. NZ needs lerts.

Q: Why does the Egg Foundation think eggs are a good breakfast food?

A: Well, the name “Egg Foundation” is a bit  of a hint. Just saying.

Q: No, why do they say that new research says eggs are a good breakfast food?

A: Because it sounds like a good excuse to advertise eggs?

Q: You know what I mean.

A: Sorry, got a bit carried away there.

Q: Well?

A: You are referring to the article about eggs increasing alertness.

Q: Yes. I assume it wasn’t a randomized trial.  Did they ask people about breakfast and alertness?

A: Not people, exactly.

Q: Ah.  So it was a mouse study. They fed the mice different things and observed their alertness?

A: Not quite. It was a study of brain cells.

Q: Where did they find people to give them the brain cells?

A: That’s where the mice come in.

Q: So, what did they actually do?

A: They took brain slices from transgenic mice, modified so the cells lit up green when the right chemical pathways were activated, and then added sugar or amino acids to the slices.

Q: And what did they find?

A: Some brain cells, which are known to be involved in both feeding and in alertness, were active when they used amino acids but not when they used sugar.  They also repeated the experiment by feeding the mice before taking the brain slices, to see if they saw the same effects (and they did).

Q: So what does this tell us about eggs for breakfast?

A: Not a whole lot, actually.  Unless you’re a dead mouse.

Q: Why didn’t you complain about the lack of links to the original research?

A: Well, if a proposed US law passes (no, not the well-known one), there won’t be any original research available on the web to link to, so we might as well get used to it.  [NYT, Guardian, Scientific American, The Atlantic, Wired, others]