Posts filed under Politics (191)

January 9, 2018

Election maps: what’s the question?

XKCD has come out with a new map of the 2016 US election.

In about 2008 I made a less-artistic one of the 2004 election on similar principles.

These maps show some useful things about the US vote:

  1. the proportions for the two parties are pretty close, but
  2. most of the land area has very few voters, and
  3. most areas are relatively polarised
  4. but not as polarised as you might think; look at the cities in Texas, for example

What these maps are terrible at is showing changes from one election to the next. The maps for 2004 (Republicans ahead by about 2.5%) and 2016 (Republicans behind by about 2%) look very similar. And even 2008 (Republicans behind by 7%) wouldn’t look that different.

Like a well-written thousand words, a well-drawn picture needs to be about something. Questions matter. The data don’t speak for themselves.

December 15, 2017

Public comments, petitions, and other self-selected samples

In the US, the Federal Communications Commission was collecting public comments about ‘net neutrality’ — an issue that’s commercially and politically sensitive in a country where many people don’t have any real choice about their internet provider.

There were lots of comments: from experts, from concerned citizens, from people who’d watched a John Oliver show. And from bots faking real names and addresses onto automated comments. The Wall Street Journal contacted a random sample of nearly 3000 commenters and found that the majority of those they could reach had not submitted the comment attached to their details. The StartupPolicyLab attempted to contact 450,000 submitters and got responses from just over 8000. Of the 7000 responses about pro-neutrality comments, nearly all agreed they had made the comment, but of the 1000 responses about anti-neutrality comments, about 88% said they had not made the comment.

It’s obviously a bad idea to treat the comments as a vote. Even if the comments were from real US people, with one comment each, you’d need to do some sort of modelling of the vast majority who didn’t comment.  But what are they good for?
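To make that modelling problem concrete, here’s a toy sketch (in Python, with entirely made-up numbers and a function of my own invention) of how the split among self-selected comments depends on each side’s participation rate as much as on what the population actually thinks:

```python
# Toy illustration with made-up numbers: the share of comments on each side
# reflects participation rates as much as actual opinion.

def comment_share(support, rate_for, rate_against):
    """Share of comments that are 'for', given the true population support
    and the rate at which each side bothers to comment."""
    comments_for = support * rate_for
    comments_against = (1 - support) * rate_against
    return comments_for / (comments_for + comments_against)

# Suppose the population is split 50:50 on an issue.
print(comment_share(0.50, 0.05, 0.01))  # ~0.83 if supporters comment 5x as often
print(comment_share(0.50, 0.01, 0.05))  # ~0.17 if opponents comment 5x as often
```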

One real benefit is for people to provide ideas you hadn’t thought of.  The public comment process on proposed New Zealand legislation certainly allows for people like Graeme Edgeler to point out bugs in the drafting, and for people whose viewpoints were not considered to speak out.  For this, it doesn’t matter what the numbers of comments are, for and against. In fact, it helps if people who don’t have something to say don’t say it.

With both petitions and public comments there’s also some quantitative value in showing that concern about some issue you weren’t worrying about isn’t negligibly small; that thousands (in NZ) or hundreds of thousands (in the US) care about it.

But if it’s already established that an issue is important and controversial, and you care about the actual balance of public opinion, you should be doing a proper opinion poll.

November 15, 2017

Summarising house prices

From the Herald (linking to this story)

[Image: Herald graphic on falling house prices]

To begin with, “worst” is distinctly unfortunate now we’ve finally got a degree of political consensus that Auckland house prices are too high. “Best” might be too much to hope for, but at least we could have a neutral term.

More importantly, as the story later concedes, it’s more complicated than that.

It’s not easy to decide what summary of housing prices is ideal.  This isn’t just about mean vs median and the influence of the priciest 1%, though that comes into it.  A bigger problem is that houses are all individuals.  Although the houses sold this October are, by and large, not the same houses that were sold last October, the standard median house price summary compares the median price of one set of houses to the median price of the other set.

When the market is stable, there’s no real problem. The houses sold this year will be pretty much the same as those sold last year. But when the market is stable, there aren’t interesting stories about real-estate prices.  When the market is changing, the mix of houses being compared  can change. In this case, that change is the whole story.

In Auckland as a whole, the median price fell 3.2%. In the old Auckland City — the isthmus — the median price fell 17%. But

Home owners shouldn’t panic though. That doesn’t mean the average house price has fallen by anything like that much.

The fall in median has been driven largely by an increasing number of apartments coming onto the market in the past year.

That is, the comparison of this October’s homes to last October’s homes is inappropriate — they aren’t similar sets of properties.  This year’s mix has many more apartments; apartments are less expensive; so this year’s mix of homes has a lower median price.

The story does admit to the problem with the headline, but it doesn’t really do anything to fix it.  A useful step would be to separate prices for apartments and houses (and maybe also for townhouses if they can be defined usefully) and say something about the price trends for each.   A graph would be a great way to do this.
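A toy example (with invented prices) shows how this works: the median within each type of home stays exactly the same, but the overall median falls because apartments make up more of the mix. The last couple of lines are the kind of grouped summary suggested above.

```python
# Invented prices: apartment and house prices are identical in both years;
# only the mix of what sold changes.
from statistics import median

apartments = list(range(400_000, 700_000, 10_000))    # 30 apartment prices
houses     = list(range(700_000, 1_500_000, 10_000))  # 80 house prices

last_october = apartments + houses * 2   # sales dominated by houses
this_october = apartments * 3 + houses   # many more apartment sales

print(median(last_october))  # 1,020,000
print(median(this_october))  # 680,000: a big fall, though no price fell

# The grouped summary the story could have led with:
for label, prices in [("apartments", apartments), ("houses", houses)]:
    print(label, median(prices))  # unchanged from year to year
```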

Separating out changes in the mix of homes on sale from general house price inflation or deflation is also helpful in policy debates. Changing the mix of housing lets us lower the price of housing by more than we lower the value of existing houses, and it would be valuable for the Auckland public to get a good feel for the difference.

Bogus poll headlines justified

The Australian postal survey on marriage equality was a terrible idea.

It was a terrible idea because that sort of thing shouldn’t be a simple majority decision.

It was a terrible idea because it wasn’t even a vote, just a survey.

It was a terrible idea because it wasn’t even a good survey, just a bogus poll.

As I repeatedly say, bogus polls don’t tell you anything much about people who didn’t vote, and so they aren’t useful unless the number voting one particular way is a notable proportion of the whole eligible population. In the end, it was.

A hair under 50% of eligible voters said ‘Yes’, just over 30% said ‘No’, and about 20% didn’t respond.

And, in what was not at all a pre-specified hypothesis, Tony Abbott’s electoral division of Warringah had an 84% participation rate and 75% ‘Yes’, giving 63% of all eligible voters indicating ‘yes’.
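For the record, that 63% is just the two published percentages multiplied together:

$$ 0.84\ (\text{participation}) \times 0.75\ (\text{'Yes' among respondents}) = 0.63\ (\text{'Yes' among all eligible voters}) $$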

 

PS: Yay!

September 27, 2017

Stat Soc of Australia on Marriage Survey

The Statistical Society of Australia has put out a press release on the Australian Marriage Law Postal Survey.  Their concern, in summary, is that if this is supposed to be a survey rather than a vote, the Government has required a pretty crap survey and this isn’t good.

The SSA is concerned that, as a result, the correct interpretation of the Survey results will be missed or ignored by some community groups, who may interpret the resulting proportion for or against same-sex marriage as representative of the opinion of all Australians. This may subsequently, and erroneously, damage the reputation of the ABS and the statistical community as a whole, when it is realised that the Survey results can not be understood in these terms.

and

The SSA is not aware of any official statistics based purely on unadjusted respondent data alone. The ABS routinely adjusts population numbers derived from the census to allow for under and over enumeration issues via its post-enumeration survey. However, under the Government direction, there is no scope to adjust for demographic biases or collect any information that might enable the ABS to even indicate what these biases might be.

If the aim was to understand the views of all Australians, an opinion survey would be more appropriate. High quality professionally-designed opinion surveys are routinely carried out by market research companies, the ABS, and other institutions. Surveys can be an efficient and powerful tool for canvassing a population, making use of statistical techniques to ensure that the results are proportioned according to the demographics of the population. With a proper survey design and analysis, public opinion can be reliably estimated to a specified accuracy. They can also be implemented at a fraction of the cost of the present Postal Survey. The ABS has a world-class reputation and expertise in this area.

(They’re not actually saying this is the most important deficiency of the process, just that it’s the most statistical one)

September 24, 2017

The polls

So, how did the polls do this time? First, the main result was predicted correctly: either side needs a coalition with NZ First.

In more detail, here are the results from Peter Ellis’s forecasts from the page that lets you pick coalitions.

Each graph has three arrows. The red arrow shows the 2014 results. The blue/black arrow pointing down shows the current provisional count and the implied number of seats, and the horizontal arrow points to Graeme Edgeler’s estimate of what the special votes will do (not because he claims any higher knowledge, but because his estimates are on a web page that explains how he got them).

First, for National+ACT+UnitedFuture

[Graph: seat forecast and results for National+ACT+UnitedFuture]

Second, for Labour+Greens

[Graph: seat forecast and results for Labour+Greens]

The result is well within  the uncertainty range of the predictions for Labour+Greens, and not bad for  National. This isn’t just because NZ politics is easy to predict: the previous election’s results are much further away. In particular, Labour really did gain a lot more votes than could reasonably have been expected a few months ago.

 

Update: Yes, there’s a lot of uncertainty. And, yes, that does  mean quoting opinion poll results to the nearest 0.1% is silly.

September 20, 2017

Democracy is coming

Unless someone says something really annoyingly wrong about polling in the next few days, I’m going to stop commenting until Saturday night.

Some final thoughts:

  • The election looks closer than NZ opinion polling is able to discriminate. Anyone who thinks they know what the result will be is wrong.
  • The most reliable prediction based on polling data is that the next government will at least need confidence and supply from NZ First. Even that isn’t certain.
  • It’s only because of opinion polling that we know the election is close. It would be really surprising if Labour didn’t do a lot better than the 25% they managed in the 2014 election — but we wouldn’t know that without the opinion polls.

 

 

September 10, 2017

Why you can’t predict Epsom from polls

The Herald’s poll aggregator had a bit of a breakdown over the Epsom electorate yesterday, suggesting that Labour had a chance of winning.

Polling data (and this isn’t something a statistician likes saying) is essentially useless when it comes to Epsom, because neither side benefits from getting their own supporters’ electorate votes. National supporters are a clear majority in the electorate. If they do their tactical voting thing properly and vote for ACT’s David Seymour, he will win.  If they do the tactical voting thing badly enough, and the Labour and Green voters do it much better, National’s Paul Goldsmith will win.

Opinion polls over the whole country don’t tell you about tactical voting strategies in Epsom. Even opinion polls in Epsom would have to be carefully worded, and you’d have to be less confident in the results.

There isn’t anywhere else quite like Epsom. There are other electorates that matter and are hard to predict — such as Te Tai Tokerau, where polling information on Hone Harawira’s popularity is sparse — but in those electorates the polls are at least asking the right question.

Peter Ellis’s poll aggregator just punts on this question: the probability of ACT winning Epsom is set at an arbitrary 80%, and he gives you an app that lets you play with the settings. I think that’s the right approach.

September 4, 2017

Before and after

We’re in the interesting situation this election where it looks like political preferences are actually changing quite rapidly (though some of this could be changes in non-response that don’t show up in actual voting).

On Thursday, One News released a poll by Colmar Brunton that found Labour ahead of National by 43% to 41% for the first time in years.  Yesterday, NewsHub released a Reid Research poll with Labour back behind National 39% to 43%.

“Released” is important here. The Colmar Brunton poll was taken over August 26-30. The Reid Research poll was taken over August 22-30. That is, despite being released later, the Reid Research poll was (on average) taken earlier. Comments on (and even analyses of) polls often ignore the interview period and focus on the release date, but here we can see why the code of conduct for pollsters requires the interview period to be described.

A difference of 4 percentage points in Labour’s support is quite large for two polls of this size (though not out of the question just from sampling error). If the polls were really discrete events four days apart, it would be plausible to argue they showed Labour’s support had stopped increasing — that the Ardern effect had reached its limit. If the two polls were taken over exactly the same period, the most plausible conclusion would be that the true support was in between and that we knew nothing more about Labour’s trajectory. With the Sunday poll actually taken slightly earlier, the difference is still likely to mostly be noise, but to the (very limited) extent that it says anything about trajectory, the story is positive for Labour.
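To put a rough number on “not out of the question just from sampling error”: assuming each poll interviewed about 1000 people (the typical size for these polls; the actual sample sizes aren’t quoted here, and this ignores weighting and design effects), a back-of-envelope calculation looks like this:

```python
# Back-of-envelope sampling error for the difference between two polls.
# Assumes roughly 1000 respondents each and simple random sampling;
# the real polls are weighted, so this is only indicative.
from math import sqrt

n1 = n2 = 1000
p1, p2 = 0.43, 0.39   # Labour support in the two polls

se_diff = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
print(round(se_diff, 3))              # about 0.022, i.e. 2.2 percentage points
print(round((p1 - p2) / se_diff, 1))  # the 4-point gap is about 1.8 standard errors
```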

August 26, 2017

Successive approximations to understanding MMP

The MMP voting system and its implications are relatively complicated. I’m going to try to give simple approximations and then corrections to them. If you want more definitive details, here’s the Electoral Commission and the Electoral Act.

Two votes: You have an electorate vote, which only affects who your local MP is, and doesn’t affect the composition of Parliament. You also have a party vote that affects the composition of Parliament, but not who your local MP is. The number of seats a party gets in Parliament is proportional to the number of party votes it gets.

This isn’t true, but it’s actually a pretty good working approximation for most of us.

There are two obvious flaws. First, if your local MP belongs to a party that doesn’t get enough votes to have any seats in Parliament, they still get to be an MP. Peter Dunne in Ōhariu was an example of this in the 2014 election. Second, when working out the number of seats a party is entitled to in Parliament, parties with less than 5% of the vote are excluded unless they won some electorate.  In the 2014 election, the Conservative Party got 3.97% of the vote, but no seats.

The Māori Party was an example of both exceptions: they got enough votes in proportional terms for two seats, but not enough to make the 5% cutoff; they didn’t have to, though, because Te Ururoa Flavell won the Waiāriki electorate seat for them.

Proportionality: There are 120 seats, so a party needs 1/120th, or about 0.83%, of the vote for each one.

That’s not quite true because of the 5% threshold, both because some parties miss out and because the relevant percentages are of the votes remaining after parties have been excluded by the threshold.

It’s also not true because of rounding. We elect whole MPs, not fractional ones, so we need a rounding rule. Roughly speaking, half-seats round up. More accurately, suppose there is some number N of votes available per seat (which will be worked out later). If you have at least 0.5×N votes you get one seat, 1.5×N gets you two seats, 13.5×N gets you fourteen seats. So what’s N? It’s roughly 1/121st (0.83%) of the votes; it’s exactly whatever number you need to allocate exactly as many seats as you have available. (The Electoral Commission actually uses a procedure that’s identical in effect to this one and easier to compute, but (I think) harder to explain.)

In 2014, the Māori Party got 1.32% of the vote, which is a bit more than 1.5×0.83%, and were entitled to two seats. ACT got less than 0.83% but more than 0.5×0.83% and were entitled to one seat.
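For the curious, here’s a minimal sketch (in Python) of that easier-to-compute procedure, the Sainte-Laguë divisor method, applied to the 2014 party-vote percentages. The function and data layout are my own illustration, and overhang seats (discussed in the next paragraph) are ignored:

```python
# Sketch of Sainte-Laguë seat allocation: each seat in turn goes to the
# party with the largest quotient votes/(2*seats_already_won + 1).
# The 5% threshold is waived for parties that won an electorate seat.
# Overhang seats are ignored.

def allocate_seats(party_votes, electorate_winners, total_seats=120):
    total = sum(party_votes.values())
    qualifying = {p: v for p, v in party_votes.items()
                  if v / total >= 0.05 or p in electorate_winners}
    seats = {p: 0 for p in qualifying}
    for _ in range(total_seats):
        winner = max(qualifying, key=lambda p: qualifying[p] / (2 * seats[p] + 1))
        seats[winner] += 1
    return seats

# 2014 party-vote percentages, standing in for vote counts
votes_2014 = {"National": 47.04, "Labour": 25.13, "Green": 10.70,
              "NZ First": 8.66, "Conservative": 3.97, "Maori": 1.32,
              "ACT": 0.69, "United Future": 0.22}
won_electorate_2014 = {"National", "Labour", "Maori", "ACT", "United Future"}

print(allocate_seats(votes_2014, won_electorate_2014))
# {'National': 60, 'Labour': 32, 'Green': 14, 'NZ First': 11,
#  'Maori': 2, 'ACT': 1, 'United Future': 0}
# United Future's one actual seat in 2014 was the overhang seat discussed next.
```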

Finally, if a party gets more seats from electorate candidates than it is due by proportionality, those seats are extra, above the 120-seat ideal size of Parliament — except that seats won by a party or individual not contesting the party vote do come out of the 120-seat total. So, in 2014, ACT got enough party votes to be due one of the 120 seats, but United Future didn’t. United Future did contest the party vote, so Peter Dunne’s seat did not come out of the 120-seat total — he was an ‘overhang’ 121st MP. I’m guessing the reason overhangs by parties contesting the party vote are extra is that you don’t know how many there will be until you’ve done the calculation, so you’d have to go back to the start and recalculate if you counted them in the 120 (which might change the number of over-allocated seats and force another recalculation, and so on).

Māori Roll: People of Māori descent can choose, every five years, to be on a Māori electoral roll rather than the general roll. If enough of them do, Māori electorates are created with the same number of people as the general electorates. There are currently seven Māori electorates, representing just over half of the people of Māori descent.  As with any electorate, you don’t have to be enrolled there to stand there; anyone eligible to be an MP can stand. 

The main oversimplification is the people of Māori descent who aren’t on either roll, because they’re too young or just not enrolled yet. You can’t tell whether they would be on the general roll or the Māori roll, so there are procedures for StatsNZ to split the non-enrolled Māori-descent population between the two electoral populations when calculating electorate sizes.