Posts filed under Surveys (135)

March 25, 2014

On a scale of 1 to 10

Via @neil_, an interactive graph of ratings for episodes of The Simpsons

[graph: episode ratings for The Simpsons over time]

This comes from graphtv, which lets you do this for all sorts of shows (e.g., Breaking Bad, which strikingly gets better ratings as each season progresses, then resets).

The reason the Simpsons graph has extra relevance to StatsChat is the distinctive horizontal line. For the first ten seasons an episode basically couldn’t be rated below 7.5; after that, it basically couldn’t be rated above 7.5. In the beginning there were ‘typical’ episodes and ‘good’ episodes; now there are ‘typical’ episodes and ‘bad’ episodes.

This could be a real change in quality, but it doesn’t match up neatly with the changes in personnel and style.  It could be a change in the people giving the ratings, or in the interpretation of the scale over time. How could we tell? One clue is that (based on checking just a handful of points) in the early years the high-rating episodes were rated by more people, and this difference has vanished or even reversed.

March 20, 2014

Beyond the margin of error

From Twitter, this morning (the graphs aren’t in the online story)

Now, the Herald-Digipoll is supposed to be a real survey, with samples that are more or less representative after weighting. There isn’t a margin of error reported, but the standard maximum margin of error would be a little over 6%.

There are two aspects of the data that make it not look representative. The first is that only 31.3% of respondents, or 37% of those claiming to have voted, said they voted for Len Brown last time. He actually got 47.8% of the vote. That discrepancy is a bit larger than you’d expect just from bad luck; it’s the sort of thing you’d expect to see about 1 or 2 times in 1000 by chance.
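
You can check that arithmetic with a quick sketch. The sample size here is my assumption, not a figure from the poll: a maximum margin of error “a little over 6%” implies roughly 250 respondents, since the maximum margin of error is about 1/√n.

```python
import math

# Assumed sample size: max MoE ≈ 1/sqrt(n), so "a little over 6%" suggests n ≈ 250
n_total = 250
n_voters = round(n_total * 0.85)  # respondents who claimed to have voted (~212)

p_true = 0.478   # Len Brown's actual share of the 2013 vote
p_obs = 0.37     # his share among respondents claiming to have voted

# Standard deviation of a sample proportion, if the truth were 47.8%
sigma = math.sqrt(p_true * (1 - p_true) / n_voters)
z = (p_obs - p_true) / sigma

# Two-sided normal tail probability
p_two_sided = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.2f}, p = {p_two_sided:.4f}")
```

which does indeed come out at about 1 or 2 in 1000.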

More impressively, 85% of respondents claimed to have voted. Only 36% of those eligible in Auckland actually voted. The standard polling margin of error is ‘two sigma’, twice the standard deviation. We’ve seen the physicists talk about ‘5 sigma’ or ‘7 sigma’ discrepancies as strong evidence for new phenomena, and the operations management people talk about ‘six sigma’ with the goal of essentially ruling out defects due to unmanaged variability. When the population value is 36% and the observed value is 85%, that’s a 16 sigma discrepancy.
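
Again assuming a sample of about 250 (my guess from the reported margin of error, not a figure from the poll), the sigma count is easy to verify:

```python
import math

n = 250          # assumed sample size (max MoE ≈ 1/sqrt(250) ≈ 6.3%)
p_true = 0.36    # actual turnout among eligible Aucklanders
p_obs = 0.85     # proportion of respondents claiming to have voted

# Standard deviation of the sample proportion if the truth were 36%
sigma = math.sqrt(p_true * (1 - p_true) / n)
z = (p_obs - p_true) / sigma
print(f"max MoE = {1 / math.sqrt(n):.1%}, discrepancy = {z:.0f} sigma")
```

which prints a 16 sigma discrepancy.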

The text of the story says ‘Auckland voters’, not ‘Aucklanders’, so I checked to make sure it wasn’t just that 12.4% of the people voted in the election but didn’t vote for mayor. That explanation doesn’t seem to work either: only 2.5% of mayoral ballots were blank or informal. Nor does it work if you assume the sample was people who voted in the last national election. Digipoll are a respectable polling company, which is why I find it hard to believe there isn’t a simple explanation; but if there is one, it isn’t in the Herald story. I’m a bit handicapped by the fact that the University of Texas internet system bizarrely decides to block the Digipoll website.

So, how could the poll be so badly wrong? It’s unlikely to just be due to bad sampling — you could do better with a random poll of half a dozen people. There’s got to be a fairly significant contribution from people whose recall of the 2013 election is not entirely accurate, or to put it more bluntly, some of the respondents were telling porkies.  Unfortunately, that makes it hard to tell if results for any of the other questions bear even the slightest relationship to the truth.

March 18, 2014

Three fifths of five eighths of not very much at all

The latest BNZ-REINZ Residential Market Survey is out, and the Herald has even embedded the full document in their online story, which is a very promising change.

According to the report, 6.4% of home sales in March were to off-shore buyers, 25% of whom were Chinese. 25% of 6.4% is 1.6%.

If you look at real estate statistics (eg, here) for last month, you find 6125 residential sales through agents across NZ. 25% of 6.4% of 6125 is 98. That’s not a very big number. For context, in the most recent month available, about 1500 new dwellings were consented.
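
The back-of-envelope calculation, for anyone following along (the figures are the ones quoted above):

```python
sales = 6125            # residential sales through agents across NZ, last month
offshore_share = 0.064  # survey estimate: share of sales to off-shore buyers
chinese_share = 0.25    # of those, the share who were Chinese

n = sales * offshore_share * chinese_share
print(round(n))  # 98 sales nationwide
```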

You also find, looking at the real estate statistics, that last month was February, not March. The BNZ-REINZ Residential Market Survey is not an actual measurement: the estimates are averages of round numbers based on the opinions of real-estate agents across the country. Even if we assume the agents know which buyers are offshore investors, as opposed to recent or near-future immigrants (they estimate 41% of the foreign buyers will move here), it’s pretty rough data. To make it worse, the question on this topic just changed, so trends are even harder to establish.

That’s probably why the report said in the front-page summary “one would struggle, statistically-speaking, to conclude there is a lift or decline in foreign buying of NZ houses.”

The Herald boldly took up that struggle.

March 4, 2014

What you don’t know

The previous post was about the failure of the ‘deficiency model’ of communication, which can be caricatured as the idea that people who believe incorrect things just need knowledge pills.

Sometimes, though, information does help. A popular example is that providing information to university students about the actual frequency of binge drinking, drug use, etc, can reduce their perception that ‘everyone is doing it’ and reduce actual risky behaviour.

So, it’s interesting to see these results from a US survey about same-sex marriage

Regular churchgoers (those who attend at least once or twice a month), particularly those who belong to religious groups that are supportive of same-sex marriage, are likely to overestimate opposition to same-sex marriage in their churches by 20 percentage points or more.

  • About 6-in-10 (59%) white mainline Protestants believe their fellow congregants are mostly opposed to same-sex marriage. However, among white mainline Protestants who attend church regularly, only 36% oppose allowing gay and lesbian people to legally marry while a majority (57%) actually favor this policy.
  • Roughly three-quarters (73%) of Catholics believe that most of their fellow congregants are opposed to same-sex marriage. However, Catholics who regularly attend church are in fact divided on the issue (50% favor, 45% oppose).

For survey nerds, the sampling methodology and complete questionnaire are also linked from that web page.

February 16, 2014

Most young Americans think astronomy is science

And they’re right.

The problem is they don’t know the difference between the words “astronomy” and “astrology”. So we get survey results like this,

A study released by the National Science Foundation finds nearly half of all Americans feel astrology—the belief that there is a tie between astrological events and human experiences—is “very” or “sort of” scientific. Young adults are even more prone to believe, with 58% of 18- to 24-year-olds saying it is a science.

Richard N. Landers, a psychologist in Virginia, thought the name confusion might be responsible, and ran a survey using Amazon’s Mechanical Turk, where you pay people to do simple tasks.  He asked people to define astrology and then to say whether they thought it was scientific.

What he found is shown in this graph, based on his data:

[graph: belief that astrology is scientific, by how respondents defined ‘astrology’]

People who think astrology is about horoscopes and predicting the future overwhelmingly don’t think it’s scientific — about 80% are in the ‘no’ and ‘hell, no’ categories. People who think astrology is about the solar system and stars think it is pretty scientific or very scientific.

The data isn’t perfect — it’s from a much less representative sample than the NSF used — but there’s a very strong indication that confusion between “astronomy” and “astrology” could explain the otherwise depressing results of the NSF survey.

(via @juhasaarinen)

February 15, 2014

I only drink it for the pictures

Stuff has a story about differences in coffee preference between regions of NZ.

A customer survey published yesterday confirms the capital has the country’s biggest coffee snobs – almost three in four will choose an expensive brew over a less tasty one every time.

They don’t give enough information to work out how big the differences are, or how they compare to the uncertainty in the survey.

There’s slightly more information in the press release – still no uncertainty, but we are told that the figure is 72% for Wellington and 67% for the country as a whole. Not a terribly impressive difference, and almost certainly the survey isn’t large enough to be able to tell which region is really highest. Fortunately, it’s not a question where the true answer matters.
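
As a rough illustration of why (the sub-sample size here is purely my assumption; the press release doesn’t give one), even a generous 200 Wellington respondents gives a margin of error larger than the five-point gap:

```python
import math

n_wgtn = 200     # hypothetical Wellington sub-sample size; not reported
p_wgtn = 0.72    # Wellington figure from the press release

# Standard 'two sigma' margin of error for a proportion
moe = 2 * math.sqrt(p_wgtn * (1 - p_wgtn) / n_wgtn)
print(f"±{moe:.1%}")  # ±6.3%, bigger than the 72% vs 67% difference
```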

You have to go to the Canstar Blue site to find that the survey population isn’t Kiwis in general, or even coffee-drinkers, but people who have been to a chain coffee store at least once in the past six months.

Interestingly, although 17% to 33% of people (varying between regions) consider coffee alone to qualify as breakfast, only 10% to 14% say they drink coffee for the caffeine.

Yeah, right.

February 14, 2014

Interpreted with caution

There’s a new paper in The Lancet, summarising population-based surveys across the world that asked about non-partner sexual violence. The paper’s conclusion, from the abstract

Sexual violence against women is common worldwide, with endemic levels seen in some areas, although large variations between settings need to be interpreted with caution because of differences in data availability and levels of disclosure.

The story in Stuff has the headline Sexual assaults more than double world average, and starts

The rate of sexual assault in Australia and New Zealand is more than double the world average, according to a new report.

After several highly publicised rapes and murders of young women in India and South Africa, researchers from several countries  decided to review and estimate prevalence of sexual violence against women in 56 countries.

The results, published in the UK medical journal The Lancet, found that 7.2 per cent of women aged 15 years or older  reported being sexually assaulted by someone other than an intimate  partner at least once in their lives.

The study found that Australia and New Zealand has the third-highest rate, more than double the world average, with 16.4  per cent.

If you look at the raw numbers reported in the paper, they showed Australia/NZ at about ten times the rate of the Caribbean or southern Latin America or Eastern Europe, which is really not plausible. Statistical adjustment for the differing types of survey reduced that margin, but, as the researchers explicitly and carefully point out, a lot of the variation between regions could easily be due to variations in disclosure, which suggests that rape is being underestimated in some areas.

As usual with extreme international comparisons, the headline is both probably wrong and missing the real point. The real point is that roughly one in six women in Australia & NZ report having experienced sexual violence.

January 17, 2014

Hard-to-survey populations

As we saw a few weeks ago with the unexpectedly high frequency of virgin births, it can be hard to measure rare things accurately in surveys, because rare errors will crowd out the true reports. It’s worse with teenagers, as a new paper from the Add Health study has reported. The paper is not open-access, but there’s a story in MedicalXpress.

So imagine the surprise and confusion when subsequent revisits to the same research subjects found more than 70 percent of the self-reported adolescent nonheterosexuals had somehow gone “straight” as older teens and young adults.

“We should have known something was amiss,” says Savin-Williams. “One clue was that most of the kids who first claimed to have artificial limbs (in the physical-health assessment) miraculously regrew arms and legs when researchers came back to interview them.”

This wasn’t just data-entry error, and it probably wasn’t a real change; after some careful analysis they conclude it was a mixture of genuine misunderstanding of the question (“romantic” vs “sexual” attraction), and, well, teenagers. Since not many teens are gay (a few percent), it doesn’t take many incorrect answers to swamp the real data.
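
To see how the swamping works, here are some purely illustrative numbers (not from the paper): a small error rate in the large heterosexual majority can easily outnumber a small true minority.

```python
true_rate = 0.03    # assumed true share of nonheterosexual teens
error_rate = 0.07   # assumed share of other teens ticking the wrong box

# Share of the sample reported as nonheterosexual
reported = true_rate + (1 - true_rate) * error_rate

# Fraction of those reports that are actually errors
share_errors = (1 - true_rate) * error_rate / reported
print(f"{reported:.1%} reported; {share_errors:.0%} of reports are errors")
```

With these made-up rates, roughly seven in ten of the ‘nonheterosexual’ responses would be errors, which is about the proportion that later ‘went straight’.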

It doesn’t matter so much that the number of gay and bisexual teenagers was overestimated. The real impact is on the analysis of health and social development in this population.  In Add Health and at least one other youth survey, according to the researchers, this sort of error has probably led to overestimating the mental and physical health problems of non-heterosexual teenagers.

January 14, 2014

Health food research marketing

The Herald has a story about better ways to present nutritional information on foods

“Our study found that those who were presented with the walking label were most likely to make healthier consumption choices, regardless of their level of preventive health behaviour,” Ms Bouton said.

“Therefore, consumers who reported to be unhealthier were likely to modify their current negative behaviour and exercise, select a healthier alternative or avoid the unhealthy product entirely when told they would need to briskly walk for one hour and 41 minutes to burn off the product.

“The traffic light system was found to be effective in deterring consumers from unhealthy foods, while also encouraging them to consume healthy products.”

This sounds good. And this is a randomised experiment, which is an excellent feature.

However, it’s just an online survey of 591 people, about a hypothetical product, so what it actually found was that the labelling system was effective in deterring people from saying they would buy unhealthy foods, encouraging them to say they would consume healthy products, and making them more likely to say they would exercise. That’s not quite so good. It’s a lot easier to get people to say they are going to eat better, exercise more, and lose weight than to get them to actually do it.

Another interesting feature is that this new research has appeared on the Herald website before. In October 2012 there was a story based on the first 220 survey responses

Not only were people more likely to exercise when they saw such labels, they also felt more guilty, Ms Bouton said.

“My findings showed that the exercise labelling was significantly more effective in both chocolate and healthier muesli bars in encouraging consumers to exercise after consumption.

“It increased the likelihood of having higher feelings of guilt after consumption and was more likely to stop [the participant] consuming the chocolate bar with the exercise labelling.”

The 2012 story still didn’t raise the issue of what people said versus their actual behaviour, but it did include an independent opinion, which pointed out that calories aren’t the only purpose of food labelling.

More importantly, the stories and the two press releases are all the information I could find online about the research. There don’t seem to be any more details either published or in an online report. It’s good to have stories about scientific research, and this sort of experiment is an important step in thinking about food labelling, but the stories are presenting stronger conclusions than can really be supported by a single unpublished online survey.

December 20, 2013

Best? Worst? It’s all just numbers

From the Herald, based on a survey by a human resource company that’s lost its shift key

“New Zealand lags behind other Asia-Pacific countries in wage equality”

From the Ministry of Women’s Affairs, based on the Quarterly Employment Survey

New Zealand’s gender pay gap is the equal lowest in the OECD (along with Ireland). The gender pay gap at 10.1 percent (2013) is the lowest in the Asia-Pacific region.

It’s not that Herald personnel don’t know about the Government figures: about a month ago they ran a story describing the small but worrying increase in the government’s estimate of the pay gap.

Now, it could be that both these things are true — perhaps they define the pay gap in different ways, or maybe the gap is much larger in some industries (and, necessarily, smaller in others).  But if you’re going to run a dramatically different estimate of such an important national statistic, it would be helpful to explain why it is different and how it was estimated, and say something about the implications of the difference.

Especially as once you find the company on the web (quite hard, since their name is “font”), you will also find they run an online salary survey website that provides self-reported salaries for a self-selected sample.  I hope that isn’t where the gender pay gap information is coming from.