Posts written by Andrew Balemi (3)


Andrew Balemi is a Professional Teaching Fellow in the Department of Statistics at The University of Auckland. He is a former Head of Marketing Science at market research company Colmar Brunton.

May 4, 2021

Dr Sally-Ann Harbison NZOM

Congratulations to Dr Sally-Ann Harbison on being made a Member of the New Zealand Order of Merit in the 2021 New Year Honours, in recognition of her services to forensic science.

Dr Sally-Ann Harbison leads the Forensic Biology Team at the Institute of Environmental Science and Research (ESR). Dr Harbison has a joint appointment with the University of Auckland’s Department of Statistics.

Dr Harbison initially joined ESR’s precursor, DSIR, in 1988 in the chemistry division, where she focused on crime scene and evidence examinations including identifying body fluids and blood grouping.

Her work led her to play a significant part in many prominent New Zealand cases. With ESR, she has been a major contributor to the development and application of New Zealand’s Forensic DNA capability. In 1999 she worked on the first homicide case that was solved by using the DNA Profile Databank.

She has been the case manager for a number of old cases that are being reviewed with more modern DNA methods. She has collaborated with colleagues worldwide and represented ESR on various international committees. She has spoken at many international conferences and meetings and authored more than 60 publications in her field. She has taught at the University of Auckland since 1996, supervising more than 60 MSc and PhD students.

Dr Harbison has led the Biology Specialist Advisory Group of the Australia/New Zealand Forensic Executive Committee and been an assessor for both Australian and American accreditation bodies.

November 3, 2011

For whom the belle polls

TV3 on Tuesday reported that “early results in a poll suggest Labour’s Phil Goff won the debate with Prime Minister John Key last night.” The poll was by RadioLIVE and Horizon.

The TV piece concluded by lambasting a recent One News text poll, saying: “A One News text poll giving the debate to Mr Key 61-39 has been widely discredited, since it cost 75c to vote.”

This text poll should be lambasted if it is used to make inference about the opinions of the population of eligible willing voters. Self-selection is the major problem here: those who can be bothered have selected themselves.

The problem: there is no way of ascertaining that this sample is representative of willing voters. Only the interested and motivated who text have answered, and the pollsters clearly have no information on the less interested, less motivated non-texters.

The industry standard is to randomly select from all eligible willing voters and to adjust for non-response. The initial selection is random and non-response is reduced as much as possible, to ensure the sample is as representative of the population as possible. The sample is drawn from a sampling frame which, hopefully, is a comprehensive list of the population you wish to talk to. For CATI polls the sampling frame is domestic landline (traditional) phone numbers.

With election polls this has not been so much of a problem: landlineless voters have tended to be in lower socio-economic groups, which also tend to have lower voter participation. The people we wish to talk to are eligible willing voters, so the polls have not been unduly biased by excluding these landlineless people.

However, as people move away from landlines to mobile phones, CATI interviewing has been increasingly criticised. Hence alternatives have been developed, such as panel-polls and market prediction polls like IPredict – and the latter will be the subject of a post for another day.

But let’s go back to the Horizon panel poll mentioned above. It claims that it’s to be trusted as it has sampled from a large population of potential panellists who have been recruited and can win prizes for participation. The Horizon poll adjusts for any biases by reweighting the sample so that it’s more like the underlying New Zealand adult population – which is good practice in general.
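
To make that reweighting concrete, here is a minimal sketch in Python (the respondents, age groups and population shares below are invented for illustration, and this is not Horizon’s actual weighting scheme): each respondent gets a weight equal to their group’s share of the population divided by that group’s share of the sample, so over-represented groups are downweighted.

```python
from collections import Counter

# Hypothetical respondents: (age_group, party_preference). All values invented.
sample = [("18-34", "A"), ("18-34", "A"), ("35-54", "B"),
          ("55+", "B"), ("55+", "A"), ("55+", "B")]

# Assumed population shares for each age group (e.g. from census figures).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

n = len(sample)
group_counts = Counter(group for group, _ in sample)
sample_share = {g: c / n for g, c in group_counts.items()}

# Weight = population share / sample share: over-represented groups count less.
weight = {g: population_share[g] / sample_share[g] for g in sample_share}

raw_support = sum(pref == "A" for _, pref in sample) / n
weighted_support = (sum(weight[g] for g, pref in sample if pref == "A")
                    / sum(weight[g] for g, _ in sample))

print(f"Raw support for party A:      {raw_support:.0%}")
print(f"Weighted support for party A: {weighted_support:.0%}")
```

Weights like these can only correct for the characteristics you weight on, which is exactly the limitation the next paragraph turns to.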

However, the trouble is that this large sampling frame of potential panellists is self-selected. So who do they represent?

To illustrate, it’s hard to imagine people from more affluent areas feeling the need to earn rewards for being on a panel. Also, you enrol via the internet, and clearly this is biased towards IT-savvy people. Here the sampling frame is biased, with little or no known way to adjust for any biases brought about by this self-selection problem. Panellists may be weighted to look like the population, but they may be fundamentally different in their political outlook.

Panel polls are being increasingly used by market researchers and polling companies. With online panel polls it’s easier to obtain samples, collect information and transfer it, without all the bother involved in traditional polling techniques like CATI.

I believe the industry has been seduced by these features at the expense of representativeness – the bedrock of all inference. Until such time as we can ensure representativeness, I remain sceptical about any claims from panel polls.

I believe the much-maligned telephone (CATI) interviewing, which is by no means perfect, still remains the best of a bad lot.

October 25, 2011

All about election polls

November 26 is Election Day, and from now on, you’ll be getting election polls from all directions. So which ones can you trust?  The easy answer is: none of them.  However, some polls are worth more than others.

Assess their worth by asking these questions:

  • Who is commissioning the poll? Is this done by an objective organisation or is it done by those who have a vested interest? Have they been clear about any conflict of interest?
  • How have they collected the opinions of a representative sample of eligible voters? One of the cardinal sins of polling is to let people select themselves (self-selection bias) to volunteer their opinions, like those ‘polls’ you see on newspaper websites. Here, you have no guarantee that the sample is representative of voters. “None of my mates down at the RSA vote that way, so all the polls are wrong” is a classic example of how self-selection bias manifests itself.
  • How did they collect their sample? Any worthy pollster will have attempted to contact a random sample of voters via some mechanism that ensures they have no idea, beforehand, who they will be able to contact. One of the easiest ways is computer-assisted telephone interviewing (CATI) of random household telephone numbers (landlines), typically sampled in proportion to geographical regions with a rural/urban split (usually called a stratified random sample). A random eligible voter then needs to be selected from that household – and it won’t necessarily be the person who most often answers the phone! This is usually done by asking which of the household’s eligible voters had the most recent birthday and talking to that person; a sketch of this selection mechanism appears after this list. The fact that not all households have landlines is an increasing concern with CATI interviewing. However, in the absence of any substantiated better technique, CATI interviewing remains the industry standard.
  • What about people who refuse to cooperate? This is called non-response. Any pollster should try to reduce this as much as possible by re-contacting households that did not answer the phone first time around, or, if the first call found that the person with the most recent birthday wasn’t home, trying again to get hold of them. If the voter still refuses, they become a ‘non-respondent’, and attempts should be made to re-weight the data so that this non-response effect is diminished. The catch is that the data is adjusted on the assumption that the respondents selected represent the opinions of the non-respondents, on whom, by definition, we have no information. This is a big assumption that rarely gets verified. Any worthy polling company will report its non-response rate and discuss how it attempts to adjust for it. Don’t trust any outfit that is not willing to discuss this!
  • Has the polling company asked reasonable, unambiguous questions? If the voters are confused by the question, their answers will be too. The pollsters need to state what questions have been asked and why. Any fool can ask questions – asking the right question is one of the most important skills in polling. Pollsters should openly supply detail on what they ask and how they ask it.
  • How can a sample of, say, 1000 randomly-selected voters represent the opinions of 3 million potential voters? This is one of the truly remarkable aspects of random sampling. The thing to realise is that whilst this is a very small sub-sample of voters, provided they have been randomly selected, the precision of the estimate is determined by the amount of information you have collected (the sample size), not by the proportion of the total population sampled (provided this sampling fraction is quite small, e.g. 1000 out of 3 million).
  • What is the margin of error (MOE)? It’s a measure of precision: the price paid for not taking a complete census, which happens only once every three years on Election Day and gives what we call, in statistical terms, a population result. The MOE is based on the behaviour of all similar possible poll results we could have selected (for a given level of confidence, usually taken to be 95%). Once we know what that behaviour is (via probability theory and suitable approximations), we can use the data that has been collected to make inference about the population that interests us. We know that 95% of all possible poll results, plus or minus their MOE, include the true unknown population value. Hence, we say we are 95% confident that a poll result, plus or minus its MOE, contains the population value.
  • When we see a quoted MOE of 3.1% (from a random sample of n=1000 eligible voters), how has it been calculated? It is, in fact, the maximum margin of error that could have been obtained for any political party. It is only really valid for parties that are close to the 50% mark (National and Labour are okay here, but it is irrelevant for, say, NZ First, whose support is closer to 5%). So if National is quoted as having a party vote of 56%, we are 95% confident that the true population value for National support is anywhere between 56% plus or minus 3.1%, or about 53% to 59% – in this case, indicating a majority (see the margin-of-error sketch after this list).
  • Saying that a party is below the margin of error is saying it has few people supporting it, and not much else. Its MOE will be much lower than the maximum MOE. For back-of-the-envelope calculations, the maximum MOE for a party is approximately 1/√n; e.g. if n=1000 random voters are sampled, then MOE ≈ 1/√1000 = 1/31.62 ≈ 3.2%, essentially the 3.1% quoted above.
  • Comparing parties becomes somewhat more complicated. If National are up, then no doubt Labour will be down. So to see if National has a lead over Labour, we have to adjust for this negative correlation. A rough rule of thumb for comparing parties sitting around 50% is to see whether they differ by more than 2 × MOE. So if Labour has 43% of the party vote and National 53% (with MOE = 3.1% from n=1000), we can see that this 10% lead is greater than 2 × 3.1% = 6.2%, indicating that we have evidence to believe National’s lead is ‘real’, or statistically significant (see the party-comparison sketch after this list).
  • Note that any poll result represents only the opinions of those sampled, at that place and time. A week is a long time in politics, and most polls become obsolete very quickly. Note also that today’s poll results can affect tomorrow’s, so these results are fluid, not fixed.
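
To make the selection mechanism in the third bullet concrete, here is a minimal sketch (the regions, frame sizes and household below are all invented, and real pollsters’ procedures differ in detail): landline numbers are drawn in proportion to each region’s share of the sampling frame, and within a contacted household the eligible voter with the most recent birthday is interviewed.

```python
import random

# Hypothetical sampling frame: landline numbers grouped by region (all invented).
frame = {"Auckland":   [f"09-{i:07d}" for i in range(5000)],
         "Wellington": [f"04-{i:07d}" for i in range(2000)],
         "Rural":      [f"06-{i:07d}" for i in range(3000)]}

n_total = 1000
frame_size = sum(len(numbers) for numbers in frame.values())

# Stratified sample: interviews allocated in proportion to each region's share.
selected = []
for region, numbers in frame.items():
    n_region = round(n_total * len(numbers) / frame_size)
    selected += [(region, num) for num in random.sample(numbers, n_region)]

def select_respondent(eligible_voters):
    """Pick the household's eligible voter with the most recent birthday
    (a cheap approximation to choosing one household member at random)."""
    return min(eligible_voters, key=lambda v: v["days_since_last_birthday"])

household = [{"name": "voter 1", "days_since_last_birthday": 200},
             {"name": "voter 2", "days_since_last_birthday": 40}]
print(len(selected), "numbers drawn;", select_respondent(household)["name"], "interviewed")
```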
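
A minimal sketch of the margin-of-error arithmetic in the bullets above, using the usual normal approximation at 95% confidence (the figures are the examples from the list; note the formula involves only the sample size n, not the size of the voting population):

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a sample proportion p based on n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000

# Maximum MOE occurs at p = 0.5 and is roughly 1/sqrt(n).
print(f"Maximum MOE: {moe(0.5, n):.1%}   (rule of thumb 1/sqrt(n) = {1 / math.sqrt(n):.1%})")

# 95% confidence interval for a party polling at 56%.
p = 0.56
print(f"56% +/- {moe(p, n):.1%}  ->  {p - moe(p, n):.1%} to {p + moe(p, n):.1%}")

# A small party near 5% has a much smaller MOE than the quoted maximum.
print(f"MOE for a party at 5%: {moe(0.05, n):.1%}")
```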
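
And a sketch of the rough two-party comparison rule from the second-to-last bullet, applied to the Labour/National figures given there:

```python
import math

def max_moe(n, z=1.96):
    """Maximum 95% margin of error (at p = 0.5) for a sample of size n."""
    return z * math.sqrt(0.25 / n)

n = 1000
national, labour = 0.53, 0.43
lead = national - labour

# Rough rule of thumb: because the two large parties' shares are negatively
# correlated, treat the lead as 'real' only if it exceeds 2 x MOE.
threshold = 2 * max_moe(n)
print(f"Lead: {lead:.1%}, threshold (2 x MOE): {threshold:.1%}, "
      f"statistically significant: {lead > threshold}")
```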

If you’re reading, watching or listening to poll results, be aware of their limitations. But note that although polls are fraught with difficulties, they remain useful. Any pollster who is open about the limitations of his or her methods is to be trusted over those who peddle certainty based on uncertain or biased information.