Posts filed under Denominator? (73)

April 9, 2016

Compared to what?

Two maps via Twitter:

From the Sydney Morning Herald, via @mlle_elle and @rpy


The differences in population density swamp anything else. For the map to be useful we’d need a comparison between ‘creative professionals’ and ‘non-creative unprofessionals’.  There’s an XKCD about this.

Peter Ellis has another visualisation of the last election that emphasises comparisons. Here’s a comparison of Green and Labour votes (by polling place) across Auckland.


There’s a clear division between the areas where Labour and Green polled about the same, and those where Labour did much better.


January 15, 2016

When you don’t find any

The Icelandic Ethical Humanist Association commissioned a survey on religion. For people who don’t want to read the survey report (in PDF, in Icelandic), there’s a story at Iceland Magazine. The main point is in the headline: 0.0% of Icelanders 25 years or younger believe God created the world, new poll reveals.

That’s a pretty strong claim, so what did the survey actually do? Well, here you do need to read the survey report (or at least feed snippets of it to Google Translate). Of the people they sampled, 109 were in the lowest age category, which is ‘younger than 25’.  None of the 109  reported believing “God created the world” vs “The world was created in the Big Bang”.

Now, that’s not a completely clean pair of alternatives, since a fair number of people — the Pope, for example — say they believe both, but it’s still informative to some extent. So what can we say about sampling uncertainty?

A handy trick for situations like this one is the ‘rule of 3’. If you ask N people and none of them is a creationist, a 95% confidence upper bound for the population proportion is 3/N. So, “fewer than 3% of Icelanders under 25 believe God created the world”.
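The rule of 3 takes only a couple of lines to sketch (the function names are mine; 3/N is itself an approximation to the exact bound obtained by solving (1 − p)^N = 0.05):

```python
# Rule of 3: if none of n respondents report a trait, an approximate
# 95% upper confidence bound for the population proportion is 3/n.
# The exact bound solves (1 - p)**n = 0.05, i.e. p = 1 - 0.05**(1/n);
# for moderate n, 3/n is very close to it.

def rule_of_three(n):
    return 3 / n

def exact_upper_bound(n):
    return 1 - 0.05 ** (1 / n)

n = 109  # the survey's under-25 sample size
print(rule_of_three(n))      # ~0.0275, i.e. "fewer than 3%"
print(exact_upper_bound(n))  # ~0.0271, so the shortcut is close
```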

December 14, 2015

A sense of scale

It was front page news in the Dominion Post today that about 0.1% of registered teachers had been investigated for “possible misconduct or incompetence in which their psychological state may have been a factor.”  Over a six year period. And 5% of them (that is, 0.005% of all teachers) were struck off or suspended as a result.

Actually, the front page news was even worse than that:


but since the “mentally-ill” bit wasn’t even true, the online version has been edited.

Given the high prevalence of some of these psychological and neurological conditions and the lack of a comparison group, it’s not even clear that they increase the risk of being investigated or struck off. After all, an early StatsChat story was about a Dom Post claim that “hundreds of unfit teachers” were working in our schools, based on 664 complaints over two years.

It would be interesting to compare figures for, say, rugby players or journalists. Except that would be missing the other point.  As Jess McAllen writes at The Spinoff, the phrasing and placement of the story, especially the original one, is a clear message to anyone with depression, or anxiety, or ADHD. Anyone who wants to think about the children might think about what that message does for rather more than 0.1% of them.

(via @publicaddress)

November 15, 2015

Out of how many?

Stuff has a story under the headline ACC statistics show New Zealand’s riskiest industries. They don’t. They show the industries with the largest numbers of claims.

To see why that’s a problem, consider instead the number of claims by broad ethnicity grouping: 135,000 for European, 23,100 for Māori, 10,800 for Pacific peoples (via StatsNZ). There’s no way that European ethnicity gives you a hugely greater risk of occupational injury than Māori or Pacific workers have. The difference between these groups is basically just population size. The true risks go in the opposite direction: 89 claims per 1000 full-time equivalent workers of European ethnicities, 97 for Māori, and 106 for Pacific.
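The reversal is visible directly from the figures in the paragraph above (a minimal sketch; the dictionaries just restate the published counts and rates):

```python
# Raw claim counts vs claims per 1,000 FTE workers, as quoted above.
claims = {"European": 135_000, "Maori": 23_100, "Pacific": 10_800}
rate_per_1000_fte = {"European": 89, "Maori": 97, "Pacific": 106}

most_claims = max(claims, key=claims.get)
highest_rate = max(rate_per_1000_fte, key=rate_per_1000_fte.get)

print(most_claims)   # European: the biggest group, so the most claims
print(highest_rate)  # Pacific: the highest actual risk per worker
```

Ranking by raw counts picks out the largest group; ranking by rates picks out the riskiest one, and here they disagree.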

With just the total claims we can’t tell whether working in supermarkets and grocery stores is really much more dangerous than logging, as the story suggests. I’m dubious, but the counts alone can’t settle it.

August 5, 2015

What does 90% accuracy mean?

There was a lot of coverage yesterday about a potential new test for pancreatic cancer. 3News covered it, as did One News (but I don’t have a link). There’s a detailed report in the Guardian, which starts out:

A simple urine test that could help detect early-stage pancreatic cancer, potentially saving hundreds of lives, has been developed by scientists.

Researchers say they have identified three proteins which give an early warning of the disease, with more than 90% accuracy.

This is progress; pancreatic cancer is one of the diseases where there genuinely is a good prospect that early detection could improve treatment. The 90% accuracy, though, doesn’t mean what you probably think it means.

Here’s a graph showing how the error rate of the test changes with the numerical threshold used for diagnosis (figure 4, panel B, from the research paper)


As you move from left to right the threshold decreases; the test is more sensitive (picks up more of the true cases), but less specific (diagnoses more people who really don’t have cancer). The area under this curve is a simple summary of test accuracy, and that’s where the 90% number came from.  At what the researchers decided was the optimal threshold, the test correctly reported 82% of early-stage pancreatic cancers, but falsely reported a positive result in 11% of healthy subjects.  These figures are from the set of people whose data was used in putting the test together; in a new set of people (“validation dataset”) the error rate was very slightly worse.
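The area under an ROC curve has a handy interpretation: it’s the probability that a randomly chosen case gets a higher test score than a randomly chosen healthy subject. A toy simulation (made-up Gaussian scores, not the paper’s data) shows how an AUC near 0.9 arises:

```python
import random

random.seed(1)
# Hypothetical test scores: cases average two standard deviations
# higher than controls. These numbers are invented for illustration.
cases = [random.gauss(2, 1) for _ in range(1000)]
controls = [random.gauss(0, 1) for _ in range(1000)]

# AUC = P(case score > control score), estimated over all pairs.
n_pairs = len(cases) * len(controls)
auc = sum(c > h for c in cases for h in controls) / n_pairs
print(round(auc, 2))  # roughly 0.9 for this degree of separation
```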

The research was done with an approximately equal number of healthy people and people with early-stage pancreatic cancer. They did it that way because that gives the most information about the test for a given number of people. It’s reasonable to hope that the area under the curve, and the sensitivity and specificity of the test, will be the same in the general population. Even so, the accuracy (in the non-technical meaning of the word) won’t be.

When you give this test to people in the general population, nearly all of them will not have pancreatic cancer. I don’t have NZ data, but in the UK the current annual rate of new cases rises from 4 per 100,000 at age 40 to 100 per 100,000 at ages 85 and over. The average over all ages is 13 cases per 100,000 people per year.

If 100,000 people are given the test and 13 have early-stage pancreatic cancer, about 10 or 11 of the 13 cases will have positive tests, but so will 11,000 healthy people.  Of those who test positive, 99.9% will not have pancreatic cancer.  This might still be useful, but it’s not what most people would think of as 90% accuracy.
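The arithmetic in the last two paragraphs, written out with the figures from the text (82% sensitivity, 11% false positives, 13 cases per 100,000 per year):

```python
population = 100_000
cases = 13                  # annual new cases per 100,000 (UK average)
sensitivity = 0.82          # fraction of true cases correctly flagged
false_positive_rate = 0.11  # fraction of healthy people flagged anyway

true_positives = cases * sensitivity                          # ~10.7
false_positives = (population - cases) * false_positive_rate  # ~11,000
ppv = true_positives / (true_positives + false_positives)

print(f"{ppv:.2%} of positives are real cases")  # about 0.10%
print(f"{1 - ppv:.1%} are false alarms")         # about 99.9%
```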


August 2, 2015

Pie chart of the week

A year-old pie chart describing Google+ users. On the right are two slices that would make up a valid but pointless pie chart: their denominator is Google+ users. On the left, two slices that have completely different denominators: all marketers and all Fortune Global 100 companies.

On top of that, it’s unlikely that the yellow slice is correct, since it’s not clear what the relevant denominator even is. And, of course, though most of the marketers probably identify as male or female, it’s not clear how the Fortune Global 100 Companies would report their gender.


From @NoahSlater, via @LewSOS, originally from kwikturnmedia about 18 months ago.

July 31, 2015

Doesn’t add up

Daniel Croft nominated a story on savings from switching power companies for Stat of the Week.  The story says

The latest Electricity Authority figures show 2.1 million consumers have switched providers since 2010, saving $164 on average for the year. In 2014, 385,596 households switched over, collectively saving $281 million.

and he argues that this level of saving without any real harm to the industry shows there was serious overcharging.  It turns out that there’s another reason the story is relevant to StatsChat. The savings number is wrong, and this is clear based on other numbers in the story.

A basic rule of numbers in journalism is that if you have two numbers, you can usually do arithmetic on them for some basic fact-checking. Dividing $281 million by 385,596 gives an average saving of over $700 per switching household. I find that a bit hard to believe — it’s a lot bigger than the ads suggest.

Looking at the end of the story, we can see average savings for people who switched in each region of New Zealand.  The highest is $318 for Bay of Plenty. It’s not possible for the national average to be more than twice the highest regional average. The numbers are wrong somewhere.
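The two checks above as explicit arithmetic (numbers straight from the story):

```python
total_saving = 281_000_000  # "collectively saving $281 million"
switchers = 385_596         # households that switched in 2014

implied_average = total_saving / switchers
print(f"${implied_average:.0f}")  # about $729, not $164

# Sanity check against the regional figures: a national average can
# never exceed the highest regional average, let alone double it.
highest_regional = 318  # Bay of Plenty, the largest quoted
print(implied_average > highest_regional)  # True, so something is wrong
```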

We can compare with the Electricity Authority report, which is supposed to be the source of the numbers.  The number 281 appears once in the document (ctrl-F is your friend):

If all households had switched to the cheapest deal in 2014 they collectively stood to save $281 million.

So, the $281 million total isn’t the estimated total saving for the 385,596 households who actually switched, it’s the estimated total saving if everyone switched to the cheapest available option — in fact, if they switched every month to the cheapest available option that month — and if they didn’t use more electricity once it was cheaper, and if prices didn’t increase to compensate.

All the quoted savings numbers are like this, averages over all households if they switched to the cheapest option, everything else being equal, rather than data on the actual switches of actual households.


July 20, 2015

Pie chart of the day

From the Herald (squashed-trees version, via @economissive)


For comparison, a pie of those aged 65+ in NZ regardless of where they live, based on national population estimates:


Almost all the information in the pie is about population size; almost none is about where people live.

A pie chart isn’t a wonderful way to display any data, but it’s especially bad as a way to show relationships between variables. In this case, if you divide by the size of the population group, you find that the proportion in private dwellings is almost identical for 65-74 and 75-84, but about 20% lower for 85+.  That’s the real story in the data.

July 8, 2015

Stolen car statistics

Both the Herald and Stuff are covering the AA Insurance list of most-stolen car brands. They have both made it clear what the ranking on the list actually means – what the denominator is:

“It’s not that there are more Honda Torneos on the road than any other car,” said AA Insurance customer relations manager Amelia Macandrew. “It’s the probability of them being stolen that’s far greater than any other car we insure.” (Stuff)


To calculate theft incidence rates, AA Insurance measures the number of claims made for each model of car for which 20 or more claims have been made, as a percentage of the total number of policies it holds for that model. (Herald)


It wasn’t as clear in past years: credit to the reporters and to AA Insurance for the improvement.


March 17, 2015

Bonus problems

If you hadn’t seen this graph yet, you probably would have soon.


The claim “Wall Street bonuses were double the earnings of all full-time minimum wage workers in 2014” was made by the Institute for Policy Studies (which is where I got the graph) and fact-checked by the Upshot blog at the New York Times, so you’d expect it to be true, or at least true-ish. It probably isn’t, because the claim being checked was missing an important word and is using an unfortunate definition of another word. One of the first hints of a problem is the number of minimum wage workers: about a million, or about 2/3 of one percent of the labour force. Given the usual narrative about the US and minimum-wage jobs, you’d expect this fraction to be higher.

The missing word is “federal”. The Bureau of Labor Statistics reports data on people paid at or below the federal minimum wage of $7.25/hour, but 29 states have higher minimum wages so their minimum-wage workers aren’t counted in this analysis. In most of these states the minimum is still under $8/hr. As a result, the proportion of hourly workers earning no more than federal minimum wage ranges from 1.2% in Oregon to 7.2% in Tennessee (PDF). The full report — and even the report infographic — say “federal minimum wage”, but the graph above doesn’t, and neither does the graph from Mother Jones magazine (it even omits the numbers of people).

On top of those getting state minimum wage we’re still short quite a lot of people, because “full-time” is defined as 35 or more hours per week at your principal job. If you have multiple part-time jobs, even if you work 60 or 80 hours a week, you are counted as part-time and not included in the graph.

Matt Levine writes:

There are about 167,800 people getting the bonuses, and about 1.03 million getting full-time minimum wage, which means that ballpark Wall Street bonuses are 12 times minimum wage. If the average bonus is half of total comp, a ratio I just made up, then that means that “Wall Street” pays, on average, 24 times minimum wage, or like $174 an hour, pre-tax. This is obviously not very scientific but that number seems plausible.

That’s slightly less scientific than the graph but, as he says, plausible. In fact, it’s not as bad as I would have guessed.
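Levine’s back-of-envelope can be reproduced in a few lines (the “bonus is half of total comp” ratio is his admitted guess; the $7.25 federal minimum is quoted earlier in this post):

```python
bonus_recipients = 167_800
min_wage_workers = 1_030_000
pool_ratio = 2.0  # total bonuses ≈ 2x total minimum-wage earnings

# Average bonus relative to average full-time minimum-wage earnings:
per_person = pool_ratio * min_wage_workers / bonus_recipients  # ~12.3
total_comp = per_person * 2   # if the bonus is half of total comp
hourly = total_comp * 7.25    # at the federal minimum wage

print(f"${hourly:.0f}/hour pre-tax")  # ballpark $178, near Levine's $174
```

The small gap from Levine’s $174 comes from rounding the worker counts; the ballpark is the point.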

What’s particularly upsetting is that you don’t need to exaggerate or use sloppy figures on this topic. It’s not even that controversial. Lots of people, even technocratic pro-growth economists, will tell you the US minimum wage is too low.  Lots of people will argue that Wall St extracts more money from the economy than it provides in actual value, with much better arguments than this.

By now you might think to check carefully that the original bar chart is at least drawn correctly.  It’s not. The blue bar is more than half the height of the red bar, not less than half.