Posts filed under Surveys (172)

November 6, 2015

Failure to read small print

[Image: smallprint]

This story/ad/column hybrid thing on the Herald site makes a good point: people don’t read the detailed terms and conditions of things. Of course, reading the terms and conditions before you agree is often infeasible — I have read the Auckland Transport HOP card T&Cs, but I don’t reread them to make sure they haven’t changed every time I agree to them by getting on a bus, and it’s not as if I have much choice, anyway.  When the small print is about large sums of money, reading it is probably more important.

The StatsChat-relevant aspect, though, is the figure of roughly $1000 per year for failing to read financial small print, which seemed strange. The quote:

Money Advice Service, a government-backed financial help centre in the UK, claims failure to read the small print is costing consumers an average of £428 (NZ$978) a year. It surveyed 2,000 consumers and found that 84 per cent didn’t bother to read the terms and conditions and, of those that did, only 17 per cent understood what they had read.

Here’s the press release (PDF) from Money Advice Service.  It surveyed 3000 people, and found that 84 per cent claimed they didn’t read the terms and conditions.

The survey asked people how much they believed misunderstanding financial terms in the last year had cost them. The average cost was £427.90.

So the figure is a bit fuzzier: it’s the average of what people reported believing they lost, which actually makes it more surprising. If you actually believed you, personally, were losing nearly a thousand dollars a year from not reading terms and conditions, wouldn’t you do something about it?

More importantly, it’s not failure to read the small print, it’s failure to understand it. The story claims only 17% of those who claimed to read the T&Cs thought they understood them — though I couldn’t find this number in the press release or on the Money Advice site, it is in the Mirror and, unsourced, in the Guardian. The survey claims about a third misunderstood what ‘interest’ meant, and of the 15% who had taken out a payday loan, more than half couldn’t explain what a ‘loan’ was, and one in five didn’t realise loans needed to be paid back.

As further evidence that either the survey is unreliable or the problem isn’t a simple failure to read, there was very little variation between regions of the UK in how many people said they read the small print, but huge variation (£128–£1014) in how much they said it cost them.

I’m not convinced we can trust this survey, but it’s not news that some people make unfortunate financial choices.  What would be useful is some idea of how often it’s really careless failure to read, how often it’s lack of basic education, how often it’s gotchas in the small print, and how often it’s taking out a loan you know is bad because the alternatives are worse.

November 1, 2015

Twitter polls and news feeds

[Image: aje]

I don’t know why this feels worse than the bogus clicky polls on newspaper websites. Maybe it’s the thought of someone actually believing the sampling scheme says something useful. Maybe it’s being on Twitter, where following a news headline feed usually gets you news headlines. Maybe it’s that the polls are so bad: restricting a discussion of Middle East politics to two options with really short labels makes even the usual slogan-based dialogue look good in comparison.

In any case, I really hope this turns out to be a failed experiment, and that we can keep Twitter polls basically as jokes.


October 22, 2015

Second-hand bogus poll

Headline: 1 in 3 women watch porn – survey

Opening sentence: One in three young women regularly view porn, with many watching it on their smartphone, it has emerged.

It turns out this is “Some 31 per cent of participants in the survey by magazine Marie Claire.” If you Google, you can find the Marie Claire invitation to take the survey, with a link. There are also Facebook and YouTube versions of the invitation. It’s a self-selected internet survey; a bogus poll.

Considered in the context of its original purpose, this survey isn’t so bad. It’s part of a major project (possibly NSFW) by the magazine, and its contributing editor Amanda de Cadenet, to discuss women’s use of pornography. The survey provided a way for them to involve readers, and a context for telling readers, however they responded, “there are lots of other women like you”. From that point of view the quantitative unreliability and poorly-defined target population aren’t such a problem, though it would presumably be better to have the right numbers.

Disconnected from the magazine and presented as data-based news, the survey results have very little going for them.

October 19, 2015

Flag referendum stats

UMR have done a survey of preferences on the new flag candidates that can be used to predict the preferential-voting result.  According to their data, while Red Peak has improved a long way from basically no support in August, it has only improved enough to be a clear third behind the two Lockwood ferns, which are basically tied for the lead both on first preferences and on the full STV count.  On the other hand, none of the new candidates is currently anywhere near beating the current flag.

The error in a poll like this is probably larger than in an election poll, because there’s no relevant past data to work with. Also, for the second round of the referendum, it’s possible that cutting the proposals down to a single alternative will affect opinion. And, who knows, maybe Red Peak will keep gaining popularity.
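For readers who haven’t seen a preferential count worked through, here is a minimal sketch of a single-winner instant-runoff count, the kind of calculation you could run on ranked preferences like those UMR collected. The ballots, the candidate labels, and the irv_winner helper are all invented for illustration; this is not UMR’s data, and not necessarily their exact procedure.

```python
# Minimal instant-runoff (single-winner preferential) count.
# Ballots list candidates from most to least preferred; all data here is invented.
from collections import Counter

def irv_winner(ballots):
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked candidate still in the race.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total:                     # majority of continuing ballots
            return leader, dict(tallies)
        # No majority yet: drop the lowest-polling candidate and recount.
        remaining.remove(min(tallies, key=tallies.get))

# Hypothetical ballots ranking some of the referendum options.
ballots = [
    ["Fern (black/blue)", "Fern (red/blue)", "Red Peak"],
    ["Red Peak", "Fern (black/blue)"],
    ["Fern (red/blue)", "Fern (black/blue)"],
    ["Koru", "Red Peak", "Fern (red/blue)"],
    ["Fern (black/blue)", "Red Peak"],
]
print(irv_winner(ballots))
```

The point of the exercise is just that second and later preferences matter: a candidate who is a clear third on first preferences can only win if it picks up most of the transfers as others are eliminated.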

September 21, 2015

It’s bad enough without exaggerating

This UK survey report is being a bit loose with the details, in a situation where that’s not even needed.

[Image: stem for boys]

The survey of more than 4,000 girls, young women, parents and teachers, demonstrates clearly that there is a perception that STEM subjects and careers are better suited to male personalities, hobbies and brains. Half (51 percent) of the teachers and 43 percent of the parents surveyed believe this perception helps explain the low uptake of STEM subjects by girls. [emphasis added]

Those aren’t the same thing at all.  I believe this perception helps explain the low uptake of STEM subjects by girls. Michelle ‘Nanogirl’ Dickinson believes this perception helps explain the low uptake of STEM subjects by girls. It’s worrying that nearly half of UK teachers don’t believe this perception helps explain the low uptake of STEM subjects by girls.

On the other hand, this is depressing and actually does seem to be what the survey said:

Nearly half (47 percent) of the young girls surveyed said they believe such subjects are a better match for boys.

as does this

[Image: difficult subjects]

It would fit with NZ experience if a lot of boys felt the same about the difficulty of science and maths, but that wouldn’t actually make it any better.


September 8, 2015

Petitions and other non-representative data

Stuff has a story about the #redpeak flag campaign, including a clicky bogus poll that currently shows nearly 11,000 votes in support of the flag candidate. While Red Peak isn’t my favourite (I prefer Sven Baker’s Huihui), I like it better than the four official candidates. That doesn’t mean I like the bogus poll.

As I’ve written before, a self-selected poll is like a petition; it shows that at least the people who took part had the views they had. The web polls don’t really even show that — it’s pretty easy to vote two or three times. There’s also no check that the votes are from New Zealand — mine wasn’t, though most of them probably are.  The Stuff clicky poll doesn’t even show that 11,000 people voted for the Red Peak flag.

So far, this Stuff poll at least hasn’t been treated as news. However, the previous one has.  At the bottom of one of the #redpeak stories you can read

In a Stuff.co.nz poll of 16,890 readers, 39 per cent of readers voted to keep the current flag rather than change it. 

Kyle Lockwood’s Silver Fern (black, white and blue) was the most popular alternate flag design, with 27 per cent of the vote, while his other design, Silver Fern (red, white and blue), got 23 per cent. This meant, if Lockwood fans rallied around one of his flags, they could vote one in.

Flags designed by Alofi Kanter – the black and white fern – and Andrew Fyfe each got 6 per cent or less of the vote

They don’t say, but that looks very much like this clicky poll from an earlier Stuff flag story, though it’s now up to about 17,500 votes.

[Image: flagpoll]

You can’t use results from clicky polls as population estimates, whether for readers or the electorate as a whole. It doesn’t work.

Over approximately the same time period there was a real survey by UMR (PDF), which found only 52% of people preferred their favourite among the four flags to the current flag.  The referendum looks a lot closer than the clicky poll suggests.

The two Lockwood ferns were robustly the most popular flags in the survey, coming  in as the top two for all age groups; men and women; Māori; and Labour, National and Green voters. Red Peak was one of the four least preferred in every one of these groups.

Only 1.5% of respondents listed Red Peak among their top four.  Over the whole electorate that’s still about 45,000, which is why an online petition with 31,000 electronic signatures should have about the impact it’s going to have on the government.

Depending on turnout, it’s going to take in the neighbourhood of a million supporting votes for a new flag to overturn the current flag. It’s going to take about the same number of votes ranking Red Peak higher than the Lockwood ferns for it to get on to the final ballot.
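As a rough back-of-envelope check on those figures (the electorate size and turnout below are my assumptions, not numbers from UMR or the Electoral Commission):

```python
# Back-of-envelope checks of the figures above; the inputs are assumed, not sourced.
electorate = 3_000_000               # assumed: roughly three million enrolled voters

print(f"1.5% of the electorate: about {0.015 * electorate:,.0f}")   # ~45,000 people

turnout = 0.67                       # assumed turnout for the binding referendum
print(f"a bare majority of votes cast: about {electorate * turnout / 2:,.0f}")
```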

In the Stuff story, Graeme Edgeler suggests “Perhaps if there were a million people in a march” would be enough to change the government’s mind. He’s probably right, though I’d say a million estimated from a proper survey, or maybe fifty thousand in a march, should be enough. For an internet petition, perhaps two hundred thousand might be a persuasive number, if some care was taken that they were distinct people and eligible voters.

For those of us in a minority on flag matters, Andrew Geddis has a useful take

In fact, I’m pretty take-it-or-leave-it on the whole point of having a “national” flag. Sure, we need something to put up on public buildings and hoist a few times at sporting events. But I quite like the fact that we’ve got a bunch of other generally used national symbols that can be appropriated for different purposes. The silver fern for putting onto backpacks in Europe. The Kiwi for our armed forces and “Buy NZ Made” logos. The Koru for when we’re feeling the need to be all bi-cultural.

If you like Red Peak, fly it. At the moment, the available data suggest you’re in as much of a minority as me.

June 18, 2015

Bogus poll story again

For a while, the Herald largely gave up basing stories on bogus clicky polls. Today, though, there was a story about Gurpreet Singh, who was barred from the Manurewa Cosmopolitan Club for refusing to remove his turban.

The headline is “Sikh club ban: How readers reacted”, and the first sentence says:

Two thirds of respondents to an online NZ Herald poll have backed the controversial Cosmopolitan Club that is preventing turbaned Sikhs from entering due to a ban on hats and headgear.

In some ways this is better than the old-style bogus poll stories that described the results as a proportion of Kiwis or readers or Aucklanders. It doesn’t make the number mean anything much, but presumably the sentence was at least true at the time it was written.

A few minutes ago I looked at the original story and the clicky poll next to it:

[Image: turban]

There are two things to note here. First, the question is pretty clearly biased: to register disagreement with the club you have to say that they were completely in the wrong and that Mr Singh should take his complaint further. Second, the “two thirds of respondents” backing the club has fallen to 40%. Bogus polls really are even more useless than you think they are, no matter how useless you think they are.

But it’s worse than that. Because of anchoring bias, the “two thirds” figure has an impact even on people who know it is completely valueless: it makes you less informed than you were before. As an illustration, how did you feel about the 40% figure in the new results? Reassured that it wasn’t as bad as the Herald had claimed, or outraged at the level of ignorance and/or bigotry represented by 40% support for the club?


June 15, 2015

Verbal abuse the biggest bullying problem at school: Students

StatsChat is involved with the biennial CensusAtSchool / TataurangaKiTeKura, a national statistics education project for primary and secondary school students. Supervised by teachers, students aged between 9 and 18 (Year 5 to Year 13) answer 35 questions in English or te reo Māori about their lives, then analyse the results in class. Already, more than 18,392 students from 391 schools all over New Zealand have taken part.

This year, for the first time, CAS asked students about bullying, a persistent problem in New Zealand schools.

School students think verbal mistreatment is the biggest bullying issue in schools – higher than cyberbullying, social or relational bullying such as social exclusion and spreading gossip, or physical bullying.

Students were asked how much they agreed or disagreed with statements about each type of bullying.  A total of 36% strongly agreed or agreed that verbal bullying was a problem among students at their school, followed by cyberbullying (31% agreed or strongly agreed), social or relational bullying (25% agreed or strongly agreed) and physical bullying (19% agreed or strongly agreed).

Read the rest of the press release here.


June 5, 2015

Peacocks’ tails and random-digit dialling

People who do surveys using random-digit dialling of phone numbers tend to think that random-digit dialling or similar attempts to sample in a representative way are very important, and sometimes attack the idea of public-opinion inference from convenience samples as wrong in principle.  People who use careful adjustment and matching to calibrate a sample to the target population are annoyed by this, and point out that not only is statistical modelling a perfectly reasonable alternative, but that response rates are typically so low that attempts to do random sampling also rely heavily on explicit or implicit modelling of non-response to get useful results.
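For concreteness, the simplest version of that kind of adjustment is post-stratification: weight each respondent by the ratio of their group’s share of the target population to its share of the sample. Here is a minimal sketch; the age bands, proportions, and responses are invented for illustration, not taken from any real survey.

```python
# Minimal post-stratification sketch: reweight an opt-in sample so that one
# grouping variable (invented age bands) matches the target population.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# (age band, answered "yes") for each respondent in a convenience sample
sample = (
    [("18-34", True)] * 50 + [("18-34", False)] * 10 +
    [("35-54", True)] * 15 + [("35-54", False)] * 15 +
    [("55+", True)] * 4 + [("55+", False)] * 6
)

n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(yes for _, yes in sample) / n
adjusted = (sum(weights[g] * yes for g, yes in sample) /
            sum(weights[g] for g, _ in sample))

print(f"raw estimate:      {raw:.2f}")       # the unadjusted sample proportion
print(f"adjusted estimate: {adjusted:.2f}")  # reweighted to match the population mix
```

Real adjustments use more variables and more sophisticated models than this, but the basic idea is the same.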

Andrew Gelman has a new post on this issue, and it’s an idea that I think should be taken further (in a slightly different direction) than he seems to take it.

It goes like this. If it becomes widely accepted that properly adjusted opt-in samples can give reasonable results, then there’s a motivation for survey organizations to not even try to get representative samples, to simply go with the sloppiest, easiest, most convenient thing out there. Just put up a website and have people click. Or use Mechanical Turk. Or send a couple of interviewers with clipboards out to the nearest mall to interview passersby. Whatever. Once word gets out that it’s OK to adjust, there goes all restraint.

I think it’s more than that, and related to the idea of signalling in economics or evolutionary biology: the idea that peacocks’ tails are adaptive not because they are useful but because they are expensive and useless.

Doing good survey research is hard for lots of reasons, only some involving statistics. If you are commissioning or consuming a survey you need to know whether it was done by someone who cared about the accuracy of the results, or someone who either didn’t care or had no clue. It’s hard to find that out, even if you, personally, understand the issues.

Back in the day, one way you could distinguish real surveys from bogus polls was that real surveys used random-digit dialling, and bogus polls didn’t. In part, that was because random-digit dialling worked, and other approaches didn’t so much. Almost everyone had exactly one home phone number, so random dialling meant random sampling of households, and most people answered the phone and responded to surveys.  On top of that, though, the infrastructure for random-digit dialling was expensive. Installing it showed you were serious about conducting accurate surveys, and demanding it showed you were serious about paying for accurate results.

Today, response rates are much lower, cell-phones are common, links between phone number and geographic location are weaker, and the correspondence between random selection of phones and random selection of potential respondents is more complicated. Random-digit dialling, while still helpful, is much less important to survey accuracy than it used to be. It still has a lot of value as a signalling mechanism, distinguishing Gallup and Pew Research from Honest Joe’s Sample Emporium and website clicky polls.

Signalling is valuable to the signaller and to the consumer, but it’s harmful to people trying to innovate.  If you’re involved with a serious endeavour in public opinion research that recruits a qualitatively representative panel and then spends its money on modelling rather than on sampling, you’re going to be upset with the spreading of fear, uncertainty, and doubt about opt-in sampling.

If you’re a panel-based survey organisation, the challenge isn’t to maintain your principles and avoid doing bogus polling, it’s to find some new way for consumers to distinguish your serious estimates from other people’s bogus ones. They’re not going to do it by evaluating the quality of your statistical modelling.


May 26, 2015

Who is my neighbour?

The Herald has a story with data from the General Social Survey. Respondents were asked if they would feel comfortable with a neighbour who was from a religious minority, LGBT, from an ethnic or racial minority, with mental illness, or a new migrant.  The point of the story was that the figure was about 50% for mental illness, compared to about 75% for the other groups. It’s a good story; you can go read it.

What I want to do here is look at how the 75% varies across the population, using the detailed tables that StatsNZ provides. Trends across time would have been most interesting, but this question is new, so we can’t get them. As a surrogate for time trends, I first looked at age groups, with these results:

[Figure: comfort with each type of neighbour, by age group]

There’s remarkably little variation by age: just a slight downturn for LGBT acceptance in the oldest group. I had expected an initial increase then a decrease: a combination of a real age effect due to teenagers growing up, then a cohort effect where people born a long time ago have old-fashioned views. I’d also expected more difference between the four questions over age group.

After that, I wasn’t sure what to expect looking at the data by region. Again, there’s relatively little variation.

[Figure: comfort with each type of neighbour, by region]

For gender and education at least the expected relationships held: women and men were fairly similar except that men were less comfortable with LGBT neighbours, and comfort went up with education.

[Figure: comfort with each type of neighbour, by sex and by education]

Dividing people up by ethnicity and migrant status was a mixture of expected and surprising. It’s not a surprise that migrants are happier with migrants as neighbours, or, since they are more likely to be members of religious minorities, that they are more comfortable with them. I was expecting migrants and people of Pacific or Asian ethnicity to be less comfortable with LGBT neighbours, and they were. I wasn’t expecting Pacific people to be the least comfortable with neighbours from an ethnic or racial minority.

[Figure: comfort with each type of neighbour, by ethnicity and migrant status]

As always with this sort of data, it’s important to remember these responses aren’t really levels of comfort with different types of neighbours. They aren’t even really what people think their level of comfort would be with different types of neighbours, just whether they say they would be comfortable. The similarity across the four questions makes me suspect there’s a lot of social conformity bias creeping in.