Posts from March 2013 (75)

March 18, 2013

Stat of the Week Competition: March 16 – 22 2013

Each week, we would like to invite readers of Stats Chat to submit nominations for our Stat of the Week competition and be in with the chance to win an iTunes voucher.

Here’s how it works:

  • Anyone may add a comment on this post to nominate their Stat of the Week candidate before midday Friday March 22 2013.
  • Statistics can be bad, exemplary or fascinating.
  • The statistic must be in the NZ media during the period of March 16 – 22 2013 inclusive.
  • Quote the statistic, when and where it was published and tell us why it should be our Stat of the Week.

Next Monday at midday we’ll announce the winner of this week’s Stat of the Week competition, and start a new one.


Stat of the Week Competition Discussion: March 16 – 22 2013

If you’d like to comment on or debate any of this week’s Stat of the Week nominations, please do so below!

March 17, 2013

To trend or not to trend

David Whitehouse has recently released a report through the Global Warming Policy Foundation stating that “It is incontrovertible that the global annual average temperature of the past decade, and in some datasets the past 15 years, has not increased”. In case it is unclear, both the author and the institute are considered sceptics of man-made climate change.

The report’s central argument is the observation that if you look at only the past decade, there is no statistically significant change in global average annual temperature. Understanding what this does, or doesn’t, mean requires considering two related statistical concepts: 1) significant versus non-significant effects, and 2) sample size and power.

Detecting a change is not the same as detecting no change. Statistical tests, indeed most of science, generally operate around Karl Popper’s idea of falsification. A null hypothesis is set up, generally a statement of the form ‘there is no effect’, and the alternative hypothesis is set up as its contrary, ‘there is an effect’. We then set out to test these competing hypotheses. What is important to realise, however, is that technically one can never prove the null hypothesis, only gather evidence against it; in contrast, one can prove the alternative hypothesis. Scientists generally word their results VERY precisely.

As a common example, imagine we want to show there are no sharks in a bay (our null hypothesis). We do some surveys, and eventually one finds a shark. Clearly our null hypothesis has been falsified, as finding a shark proves that there are sharks in the bay. However, let’s say we do a number of surveys, say 10, and find no sharks. We don’t have any evidence against our null hypothesis (i.e. we haven’t found any sharks… yet), but we haven’t ‘proven’ there are no sharks, only that we looked and didn’t find any. What if we increased it to, say, 100 surveys? That might be more convincing, but once again we can never prove there are no sharks, only demonstrate that after a large number of surveys (even 1,000, or 1,000,000) it is highly unlikely there are any. In other words, as we increase our sample size, we have more ‘power’ (a statistical term) to be confident that our results represent the underlying truth.
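
To put rough numbers on the shark example, here is a minimal sketch in Python; the 5% per-survey detection probability is an assumption invented purely for illustration. It simply shows how the chance of finding at least one shark, and hence of falsifying a false null hypothesis, grows with the number of surveys.

    # Hypothetical illustration: suppose sharks ARE in the bay, but each survey
    # has only a small (assumed 5%) chance of spotting one.
    # "Power" here is the probability that at least one survey finds a shark,
    # i.e. the chance of falsifying the null hypothesis 'there are no sharks'.

    p_detect = 0.05  # assumed per-survey detection probability

    for n_surveys in (10, 100, 1000):
        power = 1 - (1 - p_detect) ** n_surveys
        print(f"{n_surveys:>5} surveys: P(at least one shark seen) = {power:.3f}")

    # Roughly: 10 surveys -> 0.40, 100 -> 0.994, 1000 -> essentially 1.
    # Not finding a shark after 10 surveys is weak evidence of absence;
    # after 1,000 it is much stronger evidence, but still never a proof.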

And so in the case of David Whitehouse’s claim we see similar elements. The fact that an analysis of the last decade of global temperatures does not find a statistically significant trend does not prove there is none. It may mean there has been no change, but it might also mean that the dataset is not large enough to detect it (i.e. there is not enough power). Furthermore, by reducing your dataset (i.e. looking only at the last 10 years rather than 30) you are reducing your sample size, making you MORE likely NOT to detect an effect. A cunning statistical sleight of hand to make evidence of a trend disappear.

I lecture on these basic statistical concepts to my undergraduate class and demonstrate them graphically. If you put a line over any ten years of data, it could plausibly be flat; only once you accumulate enough data, say thirty years, does the extent of the trend become clear.

[Figure: Arctic sea ice 1979-2009. Demonstrates the difficulty in detecting long-term trends with noisy data]
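
As a minimal sketch of that classroom demonstration (in Python with NumPy and SciPy; the trend of 0.02 units per year and the noise level are made-up illustrative values, not climate data), fit a straight line to the last 10 years and to the full 30 years of the same simulated series:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Illustrative values only: a genuine trend of 0.02 units/year buried in
    # year-to-year noise with standard deviation 0.15.
    years = np.arange(30)
    temps = 0.02 * years + rng.normal(0, 0.15, size=30)

    for label, window in [("last 10 years", slice(20, 30)),
                          ("full 30 years", slice(0, 30))]:
        fit = stats.linregress(years[window], temps[window])
        print(f"{label}: slope = {fit.slope:.3f}, p-value = {fit.pvalue:.3f}")

    # Typically the 10-year fit is not statistically significant even though
    # the trend is real, while the 30-year fit recovers it clearly:
    # same process, less data, less power.

The only difference between the two fits is how much of the same simulated process we look at.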

This point is actually noted by the report (e.g. see Fig. 16).

Essentially, the only point that the report makes is that if you look at a small part of the dataset (less than a few decades), you can’t draw a statistically robust conclusion, since the power is low and the margin of error correspondingly wide. Most importantly, we need to be able to detect trends early, even when the power to detect them may be low. And as I have stated in earlier posts, changes in variability are as important a metric as changes in the average; the former, which is predicted from climate change, will make detecting the latter, which is also predicted, even more difficult.
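
Continuing the same hypothetical simulation, here is a sketch of that last point: holding the underlying trend fixed and increasing the year-to-year variability reduces the power to detect the trend (again, all numbers are invented for illustration).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    years = np.arange(30)

    # Same made-up trend (0.02 units/year); two levels of year-to-year
    # variability. More variability means less power to detect the trend.
    for noise_sd in (0.15, 0.45):
        pvalues = [
            stats.linregress(years, 0.02 * years + rng.normal(0, noise_sd, 30)).pvalue
            for _ in range(2000)
        ]
        power = np.mean(np.array(pvalues) < 0.05)
        print(f"noise sd = {noise_sd:.2f}: power to detect the trend = {power:.2f}")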

Briefly

  • When data gets more important, there’s more incentive to fudge it. From the Telegraph: “senior NHS managers and hospital trusts will be held criminally liable if they manipulate figures on waiting times or death rates.”
  • A new registry for people with rare genetic diseases, emphasizing the ability to customise what information is revealed and to whom.
  • Wall St Journal piece on Big Data. Some concrete examples, not just the usual buzzwords.
  • Interesting visualisations from RevDanCat

March 16, 2013

Where survey stories come from

The ‘flack:hack’ ratio, the ratio of PR professionals to journalists, has been steadily increasing over time. This graph, from the Economist, shows that the ratio in the US has now reached 9:1.

[Graph from the Economist: PR professionals per journalist in the US]

As Felix Salmon says

for every professional journalist, there are nine people, some of them extremely well paid, trying to persuade that journalist to publish something about a certain company. That wouldn’t be the case if those articles weren’t worth serious money to the companies in question.

This explains a lot of ‘survey’ stories. A no-frills 1000-person survey is not only cheaper than a half-page news-section ad in the Herald; if it gets a story, it’s also a lot more effective. A story will be syndicated to the regional papers, it will be online forever, and we’re much more likely to read and trust it.

Do scientists read newspapers or blogs?

A new paper surveyed neuroscientists in Germany and the US about where they get information on science-related news stories.

Based on the response of some 250 scientists (fairly evenly divided between the countries), the researchers found that scientists tended to give more weight to the influence of traditional media. For instance, more than 90 percent of neuroscientists in both countries said they relied on traditional journalist sources – both in print and online – to follow news about scientific events compared to around 20 percent for blogs.

Not surprisingly, the internet coverage of this paper has been fairly hostile (traditional media seems not to have covered it).

There’s a good summary of the reaction by science writer Deborah Blum, but count me on the bemused side. I do use traditional media to learn that particular science stories exist, but rarely to find out more about them.

March 15, 2013

Policing the pollsters … your input sought

This is from Kiwiblog:

A group of New Zealand’s leading political pollsters, in consultation with other interested parties, have developed draft NZ Political Polling Guidelines.

The purpose is to ensure that Association of Market Research Organisations and Market Research Society of New Zealand members conducting political polls, and media organisations publishing poll results, adhere to the highest “NZ appropriate” standards. The guidelines are draft and comments, questions and recommendations back to the working group are welcome.

This code seeks to document best practice guidelines for the conducting and reporting of political polls in New Zealand. It is proposed that the guidelines, once approved and accepted, will be binding on companies that are members of AMRO and on researchers that are members of MRSNZ.

Briefly

Better evidence in education

There’s a new UK report by Ben Goldacre, “Building Evidence into Education”, which has been welcomed by the Teacher Development Trust.

Part of the introduction is worth quoting in detail:

Before we get that far, though, there is a caveat: I’m a doctor. I know that outsiders often try to tell teachers what they should do, and I’m aware this often ends badly. Because of that, there are two things we should be clear on.

Firstly, evidence based practice isn’t about telling teachers what to do: in fact, quite the opposite. This is about empowering teachers, and setting a profession free from governments, ministers and civil servants who are often overly keen on sending out edicts, insisting that their new idea is the best in town. Nobody in government would tell a doctor what to prescribe, but we all expect doctors to be able to make informed decisions about which treatment is best, using the best currently available evidence. I think teachers could one day be in the same position.

Secondly, doctors didn’t invent evidence based medicine. In fact, quite the opposite is true: just a few decades ago, best medical practice was driven by things like eminence, charisma, and personal experience. We needed the help of statisticians, epidemiologists, information librarians, and experts in trial design to move forwards. Many doctors – especially the most senior ones – fought hard against this, regarding “evidence based medicine” as a challenge to their authority.

In retrospect, we’ve seen that these doctors were wrong. The opportunity to make informed decisions about what works best, using good quality evidence, represents a truer form of professional independence than any senior figure barking out their opinions. A coherent set of systems for evidence based practice listens to people on the front line, to find out where the uncertainties are, and decide which ideas are worth testing. Lastly, crucially, individual judgement isn’t undermined by evidence: if anything, informed judgement is back in the foreground, and hugely improved.

This is the opportunity that I think teachers might want to take up.

Pi(e) day

Today in the USA is National Pi Day (they write their dates funny, and they’re a day behind, so it’s 3.14.2013 there). The Washington Post has a set of the best and worst pie charts to celebrate.  Many of them are classics, but there was one I hadn’t seen before, showing that it’s possible to do even worse with a pie chart of a bogus poll than the Herald-Sun does.
