March 23, 2017

Democracy is coming

We have an election this year, so we are starting to have polling.

To save time, here are some potentially useful StatsChat posts about election polls:

  • A simple cheatsheet for working out the margin of error for minor parties (also including a simple Excel macro)
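The cheatsheet's calculation is the standard binomial approximation. As a rough sketch (assuming a simple random sample of about n = 1000, a typical NZ poll size), the 95% margin of error for a party polling at share p is 1.96 × √(p(1−p)/n), which is why the headline "±3.1%" figure overstates the uncertainty for minor parties:

```python
import math

def margin_of_error(p, n=1000, z=1.96):
    """Approximate 95% margin of error for an estimated share p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5): the usual headline "maximum margin of error"
print(round(100 * margin_of_error(0.5), 1))   # about 3.1 percentage points

# A minor party polling at 5% has a much smaller margin of error
print(round(100 * margin_of_error(0.05), 1))  # about 1.4 percentage points
```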


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient. See all posts by Thomas Lumley »

Comments

  • A quick thought on election polls. Has anyone measured fluxes from one party to another? E.g. if, say, National is on 45% and Labour on 30% for two polls in a row (ignoring uncertainty), it’s probably largely the same 45% and 30% of the population, but not exactly. All that would be known for sure based on the two polls is that the transition rates satisfy detailed balance.

    Does anyone try to measure the turnover rather than overall proportions?

    7 years ago
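The commenter's point can be sketched numerically (with made-up shares and flows, not data from any actual poll): identical aggregate shares in two polls are consistent with zero churn, or with substantial flows between parties that happen to cancel out.

```python
shares = [0.45, 0.30, 0.25]                 # National, Labour, Other (illustrative)

# Pairwise flows as a share of the whole electorate; each flow is matched by an
# equal flow back, so the aggregate shares stay fixed (detailed balance).
flow = {(0, 1): 0.02, (0, 2): 0.01, (1, 2): 0.015}

def transition_matrix(shares, flow):
    """Row-stochastic matrix whose off-diagonal flows exactly offset."""
    n = len(shares)
    P = [[0.0] * n for _ in range(n)]
    for (i, j), f in flow.items():
        P[i][j] = f / shares[i]             # fraction of party i's support moving to j
        P[j][i] = f / shares[j]             # and the balancing flow back
    for i in range(n):
        P[i][i] = 1 - sum(P[i][j] for j in range(n) if j != i)
    return P

P = transition_matrix(shares, flow)
new_shares = [sum(shares[i] * P[i][j] for i in range(3)) for j in range(3)]
# new_shares equals the original [0.45, 0.30, 0.25] (up to rounding), even
# though 9% of the electorate changed party between the two polls.
```

Only a panel survey that re-interviews the same people could distinguish these two scenarios, which is what the replies below get at.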

    • Thomas Lumley

      In the US, there are election panel surveys that I think do this. I don’t think any of the big NZ surveys get the same people each time, so it probably wouldn’t be feasible.

      7 years ago

      • steve curtis

        The US election panels: yes, that was the famous one from the LA Times, which was the outlier and predicted a Trump win (of course the others will say they were within the margin of error).
        They say they predicted the previous Obama win as well.
        Is the end of the random phone poll nigh?

        7 years ago

        • Thomas Lumley

          Yes, if you’re not willing to consider anything other than point estimates you will find polls pretty useless.

          This isn’t new; it has always been the case, and it’s why, right from the start, StatsChat has gone on and on about the margin of error.

          7 years ago

    • Peter Green

      I’m pretty sure Colmar Brunton did this in 2014, but I can’t find the report on their new website.

      7 years ago

  • Leon Iusitini

    In his chapter in the book ‘Kicking the Tyres’ (2012, ed. Johansson and Levine), Rob Salmond compared the accuracy of each of the four main telephone pollsters’ last poll before the 2011 election (based on their ‘root mean squared errors’) and found Colmar Brunton/TVNZ had the lowest error (about 1.4, eyeballing his graph), and Roy Morgan and Reid Research the highest, equal (about 2.2). Can we read anything into this? Is there evidence to say one pollster is ‘better’ than the others?

    7 years ago
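The RMSE comparison Salmond made can be sketched as follows. The party shares here are invented for illustration, and are not Salmond's data or any pollster's actual final poll:

```python
import math

# Hypothetical final-poll shares vs. election-night shares (percentages).
# All numbers below are illustrative only.
election   = {"National": 47.3, "Labour": 27.5, "Green": 11.1, "NZ First": 6.6}
final_poll = {"National": 50.0, "Labour": 26.0, "Green": 10.0, "NZ First": 6.0}

def rmse(poll, result):
    """Root mean squared error of a poll's party shares against the result."""
    errs = [(poll[p] - result[p]) ** 2 for p in result]
    return math.sqrt(sum(errs) / len(errs))

print(round(rmse(final_poll, election), 2))  # about 1.67 for these made-up numbers
```

One election gives only one such number per pollster, which is why a single RMSE ranking is thin evidence, as the reply below notes.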

    • Leon Iusitini

      *accuracy compared to the actual election results, that is.

      7 years ago

    • Thomas Lumley

      That’s the right sort of evidence, but there isn’t enough of it in one election, and I don’t think the 2014 election showed the same error rankings.

      What’s clear is that Roy Morgan results, which are widely thought to be much more volatile than the others, really aren’t. Accuracy is harder, because there isn’t enough ground truth. Things are easier in the US.

      7 years ago