Posts filed under Polls (132)

October 17, 2016

Vote takahē for Bird of the Year

It’s time again for the only bogus poll that StatsChat endorses: the New Zealand Bird of the Year.

Why is Bird of the Year ok?

  • No-one pretends the result means anything real about popularity
  • The point of the poll is just publicity for the issue of bird conservation
  • Even so, it’s more work to cheat than for most bogus polls

[Image: a takahē]

Why takahē?

  • Endangered
  • Beautiful (if dumb)
  • Very endangered
  • Unusual even by NZ bird standards: most of their relatives (the rail family) are shy little waterbirds.

[Image: a sora]

(A sora, a more-typical takahē relative, by/with ecologist Auriel ‘@RallidaeRule’ Fournier)

August 6, 2016

Momentum and bounce

Momentum is an actual property of physical objects, and explanations of flight, spin, and bounce in terms of momentum (and other factors) genuinely explain something.  Electoral poll proportions, on the other hand, can only have ‘momentum’ or ‘bounce’ as a metaphor — an explanation based on these doesn’t explain anything.

So, when US pollsters talk about convention bounce in polling results, what do they actually mean? The consensus facts are that a party’s polling improves after its convention, that the improvement tends to be temporary, and that convention-period polls have larger errors around the final election outcome.

Andrew Gelman and David Rothschild have a long piece about this at Slate:

Recent research, however, suggests that swings in the polls can often be attributed not to changes in voter intention but in changing patterns of survey nonresponse: What seems like a big change in public opinion turns out to be little more than changes in the inclinations of Democrats and Republicans to respond to polls. 
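
That differential-nonresponse story is easy to demonstrate in a toy simulation (a minimal sketch in Python, with invented numbers): hold everyone’s voting intention fixed and vary only how willing each side’s supporters are to answer the phone, and the observed poll share still swings by several points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed electorate: exactly 50% intend to vote D, 50% R (invented numbers).
n_voters = 1_000_000
intends_d = rng.random(n_voters) < 0.5

def poll(response_d, response_r, n_sample=1000):
    """One poll in which D and R supporters answer at different rates."""
    responds = np.where(intends_d,
                        rng.random(n_voters) < response_d,
                        rng.random(n_voters) < response_r)
    respondents = intends_d[responds]
    return rng.choice(respondents, size=n_sample, replace=False).mean()

# Before the convention: both sides equally willing to respond.
print(poll(response_d=0.10, response_r=0.10))  # about 0.50 for D

# Just after the R convention: R supporters keener to pick up the phone.
# No one has changed their mind, but D's measured share drops to about 0.40.
print(poll(response_d=0.08, response_r=0.12))
```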

As usual, my recommendation is the relatively boring 538 polls-plus forecast, which discounts the ‘convention bounce’ very strongly.

July 31, 2016

Lucifer, Harambe, and Agrabah

Public Policy Polling has a history of asking … unusual … questions in their political polls: for example, whether you are in favour of bombing Agrabah (the fictional country of Disney’s Aladdin), whether you think Hillary Clinton has ties to Lucifer, and whether you would vote for Harambe (the dead, 17-year-old gorilla) if he ran as an independent against Trump and Clinton.

Of these three questions, the Lucifer one stands out: it comes from a familiar news issue and isn’t based on tricking the respondents. People may not answer honestly, but at least they know roughly what they are being asked and how it’s likely to be understood.  Since they know what they are being asked, it’s possible to interpret the responses in a reasonably straightforward way.

Now, it’s fairly common when asking people (especially teenagers) about drug use to include some non-existent drugs to estimate the false-positive response rate.  It’s still pretty clear how to interpret the results: if the name is chosen well, no respondents will have a good-faith belief that they have taken a drug with that name, but they also won’t be confident that it’s a ringer.  You’re not aiming to trick honest respondents; you’re aiming to detect those who aren’t answering honestly.
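
As a rough illustration of how the ringer works (a sketch with invented numbers; ‘derbisol’ is the kind of made-up name such surveys use for a non-existent drug): the reported ‘use’ of the ringer gives a floor for the dishonest or careless yes-rate, which can then be subtracted from reported use of real drugs.

```python
# Toy ringer-drug correction (all numbers invented for illustration).
reported_use = {
    "cannabis": 0.15,   # a real drug
    "derbisol": 0.02,   # a non-existent 'ringer' drug
}

# Anyone claiming to have used the ringer is answering dishonestly or
# carelessly, so its rate estimates the false-positive rate.
false_positive_rate = reported_use["derbisol"]

for drug, rate in reported_use.items():
    corrected = max(rate - false_positive_rate, 0.0)
    print(f"{drug}: reported {rate:.1%}, corrected ~ {corrected:.1%}")
```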

The Agrabah question is different. There had been extensive media discussion of the question of bombing various ISIS strongholds (eg Raqqa), and this was the only live political question about bombing in the Middle East. Given the context of a serious opinion poll, it would be easy to have a good-faith belief that ‘Agrabah’ was the name of one of these ISIS strongholds and thus to think you were being asked whether bombing ISIS there was a good idea. Because of this potential confusion, we can’t tell what the respondents actually meant — we can be sure they didn’t support bombing a fictional city, but we can’t tell to what extent they were recklessly supporting arbitrary Middle-Eastern bombing versus just being successfully trolled. Because we don’t know what respondents really meant, the results aren’t very useful.

The Harambe question is different again. Harambe is under the age limit for President, from the wrong species, and dead, so what could it even mean for him to be a candidate?  The charitable view might be that Harambe’s 5% should be subtracted from the 8-9% who say they will vote for real, living, human candidates other than Trump and Clinton. On the other hand, that interpretation relies on people not recognising Harambe’s name — on almost everyone not recognising the name, given that we’re talking about 5% of responses.  I can see the attraction of using a control question rather than a half-arsed correction based on historical trends. I just don’t believe the assumptions you’d need for it to work.

Overall, you don’t have to be very cynical to suspect the publicity angle might have some effect on their question choice.

July 27, 2016

In praise of NZ papers

I whinge about NZ papers a lot on StatsChat, and even more about some of the UK stories they reprint. It’s good sometimes to look at some of the UK stories they don’t reprint.  From the Daily Express:

[Image: Daily Express poll story]

The Brexit enthusiast and Cabinet minister John Redwood says “The poll is great news, well done to the Daily Express.” As he seems to be suggesting, you don’t get results like this just by chance — having an online bogus poll on the website of an anti-Europe newspaper is a good start.

(via Antony Unwin)

July 19, 2016

Polls over petitions

I mentioned in June that Generation Zero were trying to crowdfund an opinion poll on having a rail option in Auckland’s new harbour crossing.

Obviously they’re doing this because they think they know what the answer will be, but it’s still a welcome step towards evidence-based lobbying.

The results are out, in a poll conducted by UMR. Well, a summary of the results is out, in a story at The Spinoff, and we can hope the rest of the information turns up on Generation Zero’s website at some point. A rail crossing is popular, even when its cost is presented as part of the question:

[Image: graph of harbour crossing poll results]

The advantage of proper opinion polls over petitions and other sorts of bogus polls is representativeness.  If 50,000 people sign a petition, all you know is that the true number of supporters is at least 50,000 (and maybe not even that).  Sometimes there will be one or two silent supporters for each petition vote (as with Red Peak); sometimes many more; sometimes fewer.
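
To make the contrast concrete, here is a minimal sketch with hypothetical numbers (a 1,000-person poll finding 60% support, and a 50,000-signature petition in an adult population of roughly 3.5 million): the poll yields an estimate of the population proportion with computable uncertainty, while the petition yields only a count.

```python
import math

# Hypothetical poll: 600 of 1,000 respondents support the proposal.
n, supporters = 1000, 600
p_hat = supporters / n
moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)  # 95% margin of error
print(f"Poll: {p_hat:.0%} support, +/- {moe:.1%}")

# Petition: 50,000 signatures out of ~3.5 million adults tells you only
# that support is at least 1.4% (and maybe not even that).
print(f"Petition: support >= {50_000 / 3_500_000:.1%}")
```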

Petitions do have the advantage that you feel as if you’re doing something when you sign, but we can cope without that: after all, we still have social media.

May 24, 2016

Microplummeting

Headline: “Newshub poll: Key’s popularity plummets to lowest level”

Just 36.7 percent of those polled listed the current Prime Minister as their preferred option — down 1.6 percent from a Newshub poll in November.

National though is steady on 47 percent on the poll — a drop of just 0.3 percent — and similar to the Election night result.

So, apparently, 0.3% is “steady” and 1.6% is a “plummet”.

The reason we quote the ‘maximum margin of error’, even though it’s a crude summary, isn’t a good way to describe evidence, underestimates variability, and is a terribly misleading phrase, is that it at least gives some indication of what is worth headlining.  The maximum margin of error for this poll is 3%, but a change between two polls is the difference of two independent estimates, so its margin of error is larger by a factor of √2 (about 1.4): roughly 4.3%.

That 4.3% is the maximum margin of error, for a true value of 50%; the Prime Minister’s support is lower, but that doesn’t make much difference. I did a quick simulation to check: if nothing happened, the Prime Minister’s measured popularity would plummet or soar by more than 1.6% between two polls about half the time, purely from sampling variation.
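
Here is roughly what such a quick check might look like (my reconstruction in Python, not the original code, and the poll size of 1,000 is an assumption): hold the true support fixed at 36.7% and see how often the apparent change between two independent polls exceeds 1.6 points.

```python
import numpy as np

rng = np.random.default_rng(1)

p_true, n = 0.367, 1000   # fixed 'true' support; assumed poll size
n_sims = 100_000

# Two independent polls of the same unchanged population.
poll1 = rng.binomial(n, p_true, n_sims) / n
poll2 = rng.binomial(n, p_true, n_sims) / n

# Fraction of poll-to-poll 'changes' bigger than 1.6 points:
print((np.abs(poll2 - poll1) > 0.016).mean())  # about 0.46
```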


April 28, 2016

Marking beliefs to market

Back in August, I wrote

Trump’s lead isn’t sampling error. He has an eleven percentage point lead in the poll averages, with sampling error well under one percentage point. That’s better than the National Party has ever managed. It’s better than the Higgs Boson has ever managed.

Even so, no serious commentator thinks Trump will be the Republican candidate. It’s not out of the question that he’d run as an independent — that’s a question of individual psychology, and much harder to answer — but he isn’t going to win the Republican primaries.

Arguably that was true: no serious commentator, as far as I know, did think Trump would be the Republican candidate.  But he is going to win the Republican primaries, and the opinion polls haven’t been all that badly wrong about him — better than the experts.

March 11, 2016

Getting to see opinion poll uncertainty

Rock’n Poll has a lovely guide to sampling uncertainty in election polls, guiding you step by step to see how approximate the results would be in the best of all possible worlds. Highly recommended.

Of course, we’re not in the best of all possible worlds, and in addition to pure sampling uncertainty we have ‘house effects’ due to different methodology between polling firms and ‘design effects’ due to the way the surveys compensate for non-response.  And on top of that there are problems with the hypothetical question ‘if an election were held tomorrow’, and probably issues with people not wanting to be honest.

Even so, the basic sampling uncertainty gives a good guide to the error in opinion polls, and anything that makes it easier to understand is worth having.
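
If you want the same lesson without the interactive guide, a few lines will do (a sketch, assuming a 1,000-person poll and a party whose true support is exactly 45%): repeated polls of the same unchanged population give visibly different results.

```python
import numpy as np

rng = np.random.default_rng(2)

p_true, n = 0.45, 1000  # assumed true support and poll size
# Ten polls of the same population in the same week:
for i in range(10):
    print(f"Poll {i + 1}: {rng.binomial(n, p_true) / n:.1%}")
# The results scatter by a couple of points either way, purely
# from sampling, even in this best of all possible worlds.
```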

[Image: screenshot from Rock’n Poll]

(via Harkanwal Singh)

February 21, 2016

Evils of axis

From One News, tweeted by various people:

[Image: One News poll graph]

The y-axis label is wrong: this has nothing to do with change; it’s percent support.

The x-axis label is maximally unhelpful: we can guess that the most recent poll is in February, but what are the earlier data? You might think the vertical divisions are months, but the story says the previous poll was in October.

Also, given that the measurement error is large compared to the expected changes, using a line graph without points indicating the observations is misleading.

Overall, the graph doesn’t add to the numbers on the right, which is a bit of a waste.

December 19, 2015

Punk’d

Earlier this year a current affairs program announced that they would have an interview with the man who didn’t get swallowed by a giant anaconda. Taken literally, this doesn’t restrict the options much.  There’s getting on for three billion men who haven’t been swallowed by giant anacondas; you probably know several yourself.  On the other hand, everyone knew which guy they meant.

There’s a branch of linguistics, called ‘pragmatics’, that studies how everyone knows what you mean in cases like this. The “Cooperative Principle” and Grice’s Maxims describe the assumption that everyone’s trying to move the conversation along and isn’t deliberately trolling.

One of the US opinion polling companies, Public Policy Polling, seems to make a habit of trolling its respondents.  This time, they asked whether people were in favour of bombing Agrabah.  30% of Republican supporters were. So were 19% of Democratic supporters, though for some reason this has been less widely reported. As you know, of course, since you are extremely well-read, Agrabah is not a town or region in Syria, nor is it held by Da’esh. It is, in fact, the fictional location of Disney’s Aladdin movie, starring among others the late, great Robin Williams.

I’m pretty sure that less than 30% even of Republican voters really support bombing a fictional country. In fact, I’d guess it’s probably less than 5%. But think about how the question was asked.  You’re a stereotypical Republican voter dragged away from a quiet dinner with your stereotypical spouse and 2.3 stereotypical kids by this nice, earnest person on the phone who wants your opinion about important national issues.  You know there’s been argument about whether to bomb this place in the Middle East. You can’t remember if the name matches, but obviously if they’re asking a serious question that must be the place they mean. And it seemed like a good idea when it was explained on the news. Even the British are doing it. So you say “Support”.

The 30% (or 19%) doesn’t mean Republicans (or Democrats) want to bomb Aladdin. It doesn’t even mean they want to bomb arbitrary places they’ve never heard of. It means they were asked a question carefully phrased to sound as if it was about a genuine geopolitical controversy and they answered it that way.

When Ali G does this sort of thing to political figures, it’s comedy. When Borat does it to unsuspecting Americans it’s a bit dubious. When it’s mixed in with serious opinion polling, it risks further damaging what’s already a very limited channel for gauging popular opinion.