Posts filed under Polls (119)

July 27, 2016

In praise of NZ papers

I whinge about NZ papers a lot on StatsChat, and even more about some of the UK stories they reprint. It’s good sometimes to look at some of the UK stories they don’t reprint.  From the Daily Express

[Image: Daily Express story reporting the results of its online poll]

The Brexit enthusiast and Cabinet minister John Redwood says “The poll is great news, well done to the Daily Express.” As he seems to be suggesting, you don’t get results like this just by chance — having an online bogus poll on the website of an anti-Europe newspaper is a good start.

(via Antony Unwin)

July 19, 2016

Polls over petitions

I mentioned in June that Generation Zero were trying to crowdfund an opinion poll on having a rail option in Auckland’s new harbour crossing.

Obviously they’re doing this because they think they know what the answer will be, but it’s still a welcome step towards evidence-based lobbying.

The results are out, in a poll conducted by UMR. Well, a summary of the results is out, in a story at The Spinoff, and we can hope the rest of the information turns up on Generation Zero’s website at some point. A rail crossing is popular, even when its cost is presented as part of the question:

[Graph: UMR poll results on support for a rail harbour crossing]

The advantage of proper opinion polls over petitions and other sorts of bogus polls is representativeness.  If 50,000 people sign a petition, all you know is that the true number of supporters is at least 50,000 (and maybe not even that).  Sometimes there will be one or two silent supporters for each signature (as with Red Peak); sometimes many more; sometimes fewer.

Petitions do have the advantage that you feel as if you’re doing something when you sign, but we can cope without that: after all, we still have social media.

May 24, 2016

Microplummeting

Headline: “Newshub poll: Key’s popularity plummets to lowest level”

Just 36.7 percent of those polled listed the current Prime Minister as their preferred option — down 1.6 percent — from a Newshub poll in November.

National though is steady on 47 percent on the poll — a drop of just 0.3 percent — and similar to the Election night result.

So, apparently, 0.3% is “steady” and 1.6% is a “plummet”.

The reason we quote the ‘maximum margin of error’, even though it’s a crude summary, a poor way to describe evidence, an underestimate of variability, and a terribly misleading phrase, is that it at least gives some indication of what is worth headlining.  The maximum margin of error for this poll is 3%, but the margin of error for a change between two polls is larger by a factor of √2 (about 1.4), so roughly 4.3%.

That’s the maximum margin of error, computed for a 50% true value, but the difference doesn’t matter much at 36.7%; I did a quick simulation to check. If nothing had happened, the Prime Minister’s measured popularity would ‘plummet’ or ‘soar’ by more than 1.6% between two polls about half the time, purely from sampling variation.
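Here’s a minimal sketch of that kind of simulation, in Python. The sample size of 1,000 is an assumption (the story doesn’t give it), and the threshold matches the reported 1.6-point change:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000        # assumed sample size; typical for NZ political polls
p = 0.367       # the measured level of support
reps = 100_000  # simulated pairs of polls

# Two independent polls with no real change in support between them
poll1 = rng.binomial(n, p, size=reps) / n
poll2 = rng.binomial(n, p, size=reps) / n

# How often does support appear to move by more than 1.6 points anyway?
moved = np.mean(np.abs(poll2 - poll1) > 0.016)
print(f"Apparent moves of more than 1.6 points: {moved:.0%} of pairs")
```

With these assumptions, just under half the simulated poll pairs show an apparent move bigger than 1.6 points, even though nothing changed.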


April 28, 2016

Marking beliefs to market

Back in August, I wrote

Trump’s lead isn’t sampling error. He has an eleven percentage point lead in the poll averages, with sampling error well under one percentage point. That’s better than the National Party has ever managed. It’s better than the Higgs Boson has ever managed.

Even so, no serious commentator thinks Trump will be the Republican candidate. It’s not out of the question that he’d run as an independent — that’s a question of individual psychology, and much harder to answer — but he isn’t going to win the Republican primaries.

Arguably that was true: no serious commentator, as far as I know, did think Trump would be the Republican candidate.  But he is going to win the Republican primaries, and the opinion polls haven’t been all that badly wrong about him — better than the experts.
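For a sense of scale on the Higgs comparison, here’s a back-of-the-envelope calculation. The poll count, sample size, and support level are assumptions for illustration, not figures from the original post:

```python
from math import sqrt

# Assumed figures: a poll average built from roughly 10 polls of
# n = 1000 respondents each, with support around 30%.
n_polls, n, p = 10, 1000, 0.30

se_one_poll = 100 * sqrt(p * (1 - p) / n)  # in percentage points, ~1.4
se_average = se_one_poll / sqrt(n_polls)   # independent polls, ~0.46

print(f"SE of the poll average: {se_average:.2f} points")
print(f"An 11-point lead is {11 / se_average:.0f} standard errors from zero")
# The Higgs boson discovery threshold was 5 standard deviations.
```

Under those assumptions an eleven-point lead sits more than twenty standard errors from zero; the Higgs needed only five.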

March 11, 2016

Getting to see opinion poll uncertainty

Rock’n Poll has a lovely guide to sampling uncertainty in election polls, walking you step by step through how approximate the results would be in the best of all possible worlds. Highly recommended.

Of course, we’re not in the best of all possible worlds, and in addition to pure sampling uncertainty we have ‘house effects’ due to different methodology between polling firms and ‘design effects’ due to the way the surveys compensate for non-response.  And on top of that there are problems with the hypothetical question ‘if an election were held tomorrow’, and probably issues with people not wanting to be honest.

Even so, the basic sampling uncertainty gives a good guide to the error in opinion polls, and anything that makes it easier to understand is worth having.
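You can reproduce the basic idea yourself with a few lines of Python; the 45% true support and 1,000-person polls here are just illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

# The best of all possible worlds: pure random sampling from a known truth
true_p, n = 0.45, 1000

polls = rng.binomial(n, true_p, size=10_000) / n
lo, hi = np.percentile(polls, [2.5, 97.5])
print(f"True support {true_p:.0%}; 95% of polls land between {lo:.1%} and {hi:.1%}")
```

Even with perfect random sampling, the middle 95% of polls spans about six percentage points.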

[Image: screenshot from the Rock’n Poll guide]

(via Harkanwal Singh)

February 21, 2016

Evils of axis

From One News, tweeted by various people:

[Image: One News poll graph]

The y-axis label is wrong: this graph has nothing to do with change; it shows percent support.

The x-axis label is maximally unhelpful: we can guess that the most recent poll is in February, but what are the earlier data? You might think the vertical divisions are months, but the story says the previous poll was in October.

Also, given that the measurement error is large compared to the expected changes, using a line graph without points indicating the observations is misleading.

Overall, the graph doesn’t add to the numbers on the right, which is a bit of a waste.

December 19, 2015

Punk’d

Earlier this year a current affairs program announced that they would have an interview with the man who didn’t get swallowed by a giant anaconda. Taken literally, this doesn’t restrict the options much.  There’s getting on for three billion men who haven’t been swallowed by giant anacondas; you probably know several yourself.  On the other hand, everyone knew which guy they meant.

There’s a branch of linguistics, called ‘pragmatics’, that studies how everyone knows what you mean in cases like this. The “Cooperative Principle” and Grice’s Maxims look at the assumption that everyone’s trying to move the conversation along and isn’t deliberately trolling.

One of the US opinion polling companies, Public Policy Polling, seems to make a habit of trolling its respondents.  This time, they asked whether people were in favour of bombing Agrabah.  30% of Republican supporters were. So were 19% of Democratic supporters, though for some reason this has been less widely reported. As you know, of course, since you are extremely well-read, Agrabah is not a town or region in Syria, nor is it held by Da’esh. It is, in fact, the fictional location of Disney’s Aladdin movie, starring among others the late, great Robin Williams.

I’m pretty sure that less than 30% even of Republican voters really support bombing a fictional country. In fact, I’d guess it’s probably less than 5%. But think about how the question was asked.  You’re a stereotypical Republican voter dragged away from a quiet dinner with your stereotypical spouse and 2.3 stereotypical kids by this nice, earnest person on the phone who wants your opinion about important national issues.  You know there’s been argument about whether to bomb this place in the Middle East. You can’t remember if the name matches, but obviously if they’re asking a serious question that must be the place they mean. And it seemed like a good idea when it was explained on the news. Even the British are doing it. So you say “Support”.

The 30% (or 19%) doesn’t mean Republicans (or Democrats) want to bomb Aladdin. It doesn’t even mean they want to bomb arbitrary places they’ve never heard of. It means they were asked a question carefully phrased to sound as if it was about a genuine geopolitical controversy and they answered it that way.

When Ali G does this sort of thing to political figures, it’s comedy. When Borat does it to unsuspecting Americans it’s a bit dubious. When it’s mixed in with serious opinion polling, it risks further damaging what’s already a very limited channel for gauging popular opinion.

December 11, 2015

Against sampling?

Stuff has a story from the Sydney Morning Herald, on the claim that smartphones will be obsolete in five years. They don’t believe it. Neither do I, but that doesn’t mean we agree on the reasons.  The story thinks not enough people were surveyed:

The research lab surveyed 100,000 people across its native Sweden and 39 other countries.

With around 1.9 billion smartphone users globally, this means ConsumerLab covered just 0.0052 per cent of active users for its study.

This equates to about 2500 in each country; the population of Oberon

If you don’t recognise Oberon, it’s a New South Wales town slightly smaller than Raglan.

Usually, the Sydney Morning Herald doesn’t have such exacting standards for sample size. For example, their recent headline “GST rise backed by voters if other taxes cut: Fairfax-Ipsos poll” was based on 1402 people, about the population of Moerewa.

The survey size is plenty large enough if it was done right. You don’t, as the saying goes, have to eat the whole egg to know that it’s rotten. If you have a representative sample from a population, the size of the population is almost irrelevant to the accuracy of survey estimates from the sample. That’s why opinion polls around the world tend to sample 1000-2000 people, even though that’s 0.02-0.04% of the population of New Zealand, 0.004%-0.009% of the population of Australia, or 0.0003-0.0006% of the population of the USA.
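The reason population size hardly matters is the finite-population correction, which is essentially 1 whenever the sample is a tiny fraction of the population. A quick illustration (the population figures are rough):

```python
from math import sqrt

def max_moe(n, N):
    """Maximum 95% margin of error for a simple random sample of n from N."""
    fpc = sqrt((N - n) / (N - 1))  # finite-population correction
    return 1.96 * sqrt(0.25 / n) * fpc

n = 1500
for name, N in [("NZ", 4_600_000), ("Australia", 24_000_000), ("USA", 320_000_000)]:
    print(f"{name:>9} (N = {N:>11,}): max MoE = {max_moe(n, N):.2%}")
# All three print essentially the same ~2.5%: only the sample size matters.
```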

What’s important is whether the survey is representative, which can be achieved either by selecting and weighting people to match the population, or by random sampling, or in practice by a mixture of the two.  Unfortunately, the story completely fails to tell us.

Looking at the Ericsson ConsumerLab website, it doesn’t seem that the survey is likely to be representative — or at least, there aren’t any details that would indicate it is.  This means it’s like, say, the Global Drug Survey, which also has 100,000 participants, out of over 2 billion people worldwide who use alcohol, tobacco, and other drugs, and which Stuff and the SMH have reported on at great length and without the same skepticism.

December 8, 2015

What you do know that isn’t so

The Herald (and others) are reporting an international Ipsos MORI poll on misperceptions about various national statistics.  Two of the questions are things I’ve written about before: crude wealth inequality and proportion of immigrants.

New Zealanders on average estimated that 37% of our population are immigrants.  That’s a lot — it’s more than New York or London. The truth is 25%, which is still higher than most of the other countries. Interestingly, the proportion of immigrants in Auckland is quite close to 37%, and a lot of immigration-related news seems to focus on Auckland.   I think the scoring system based on absolute differences is unfair to NZ here: saying 37% when the truth is 25% doesn’t seem as bad as saying 10% when the truth is 2% (as in Japan).
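One way to see the unfairness is to put the two errors on an absolute and a relative scale side by side, using the figures quoted above:

```python
# Absolute versus relative error for the immigration question
for country, guess, truth in [("NZ", 37, 25), ("Japan", 10, 2)]:
    print(f"{country:>5}: off by {abs(guess - truth)} points, "
          f"but only {guess / truth:.1f}x the true value")
```

On absolute differences NZ looks worse (12 points off versus 8), but Japan’s guess is five times the truth while NZ’s is less than one and a half times.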

We also estimated that 1% of the NZ population own 50% of the wealth. Very similar estimates came from a lot of countries, so I don’t think this is because of coverage of inequality in New Zealand.  My guess is that we’re seeing the impact of the Credit Suisse reports (eg, in Stuff), which say 50% of the world’s wealth is owned by the top 1%.  Combined with the fact that crude wealth inequality is a bogus statistic anyway, the Credit Suisse reports really seem to do more harm than good for public knowledge.

September 28, 2015

Seeing the margin of error

A detail from Andrew Chen’s visualisation of all the election polls in NZ:

[Image: detail from Andrew Chen’s interactive graph of NZ election polls]

His full graph is somewhat interactive: you can zoom in on times, select parties, etc. What I like about this format is how clear it makes the poll-to-poll variability.  The poll result for, say, National isn’t a line, it’s a cloud of uncertainty.

The cloud of uncertainty gets narrower for minor parties (as detailed in my cheatsheet), but for the major parties you can see it span an entire 10-percentage-point grid cell or more.
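That narrowing follows directly from the standard error of a proportion, which shrinks as support moves away from 50%. A sketch, assuming 1,000-person polls:

```python
from math import sqrt

n = 1000  # assumed poll size
for p in [0.47, 0.25, 0.10, 0.05, 0.01]:
    moe = 1.96 * sqrt(p * (1 - p) / n)  # 95% margin of error
    print(f"support {p:>4.0%}: margin of error ±{moe:.1%}")
```

A party polling near 47% carries a margin of error around ±3.1%, while one polling at 5% is down to about ±1.4%, which is why the minor parties’ clouds look so much tighter.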