Posts filed under Denominator? (75)

June 13, 2016

Reasonable grounds

Mark Hanna submitted an OIA request about strip searches in NZ prisons, which are carried out when there are ‘reasonable grounds to believe’ the prisoner has an unauthorised item. You can see the full response at FYI. He commented that 99.3% of these searches find nothing.

Here’s the monthly data over time:

[Chart: monthly strip-search data]
The positive predictive value of having ‘reasonable grounds’ is increasing, and is up to about 1.5% now. That’s still pretty low. How ‘reasonable’ it is depends on what proportion of the time people who aren’t searched have unauthorised items: if that were, say, 1 in 1000, having ‘reasonable grounds’ would multiply the detection rate 5-15-fold, which might conceivably count as reasonable.
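As a rough check on that multiplier, here’s a minimal sketch; the searched-detection rates come from the figures above, and the 1-in-1000 baseline is the hypothetical value from the previous sentence, not a measured rate:

```python
# Rough check on the multiplier: searched rates from the post, the
# unsearched rate is the hypothetical 1-in-1000 figure above.
ppv_earlier = 1 - 0.993   # 99.3% of searches found nothing
ppv_now = 0.015           # about 1.5% now
baseline = 1 / 1000       # hypothetical rate among prisoners not searched

for label, ppv in [("earlier", ppv_earlier), ("now", ppv_now)]:
    print(f"{label}: {ppv:.1%} of searches find something, "
          f"{ppv / baseline:.0f}x the baseline")
```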

We can look at the number of searches conducted, to see if that tells us anything about trends:
[Chart: number of strip searches conducted, by month]
Again, there’s a little good news: the number of strip searches has fallen over the past couple of years. That’s a real rise and fall — the prison population has been much more stable. The trend looks very much like the first trend upside down.

Here’s the trend for the number (not proportion) of searches finding something:
[Chart: number of searches finding something, by month]
It’s pretty much constant over time.

Statistical models confirm what the pictures suggest: the number of successful searches is essentially uncorrelated with the total number of searches. This is also basically good news (for the future, if not the past): it suggests that a further reduction in strip searches may well be possible at no extra risk.
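The check itself is straightforward; here’s a minimal sketch, assuming the monthly figures from the OIA response have been typed into a CSV (the file name and column names are made up for illustration):

```python
import pandas as pd
from scipy import stats

# Hypothetical file holding the monthly figures from the OIA response
searches = pd.read_csv("strip_searches_monthly.csv")

# If successful searches don't track total searches, the correlation
# between the two monthly counts should be close to zero.
r, p = stats.pearsonr(searches["total_searches"],
                      searches["successful_searches"])
print(f"correlation = {r:.2f} (p = {p:.2f})")
```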

May 29, 2016

I’ma let you finish

Adam Feldman runs the blog Empirical SCOTUS, with analyses of data on the Supreme Court of the United States. He has a recent post (via Mother Jones) showing how often each judge was interrupted by other judges last year:

[Chart: how often each judge was interrupted last year]

For those of you who don’t follow this in detail, Elena Kagan and Sonia Sotomayor are women.

Looking at the other end of the graph, though, shows something that hasn’t been taken into account. Clarence Thomas wasn’t interrupted at all. That’s not primarily because he’s a man; it’s primarily because he almost never says anything.

Interpreting the interruptions really needs some denominator. Fortunately, we have denominators. Adam Feldman wrote another post about them.

Here’s the number of interruptions per 1000 words, with the judges sorted in order of how much they speak:

[Chart: interruptions per 1000 words spoken]

And here’s the same thing with interruptions per 100 ‘utterances’:

[Chart: interruptions per 100 utterances]

It’s still pretty clear that the female judges are interrupted more often (yes, this is statistically significant, though not strongly so). Taking the amount of speech into account makes the differences smaller but, interestingly, also shows that Ruth Bader Ginsburg is interrupted relatively often.
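The adjustment is just division by the denominator; here’s a minimal sketch with made-up counts (not Feldman’s actual figures) showing the two rates side by side:

```python
# Made-up counts for illustration; Feldman's posts have the real figures.
judges = {
    # name: (times interrupted, words spoken, utterances)
    "Justice A": (50, 30_000, 400),
    "Justice B": (10, 60_000, 700),
}

for name, (interrupted, words, utterances) in judges.items():
    per_1000_words = 1000 * interrupted / words
    per_100_utterances = 100 * interrupted / utterances
    print(f"{name}: {per_1000_words:.1f} per 1000 words, "
          f"{per_100_utterances:.1f} per 100 utterances")
```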

Denominators do matter.

April 9, 2016

Compared to what?

Two maps via Twitter:

From the Sydney Morning Herald, via @mlle_elle and @rpy

[Map: where ‘creative professionals’ live, from the Sydney Morning Herald]

The differences in population density swamp anything else. For the map to be useful we’d need a comparison between ‘creative professionals’ and ‘non-creative unprofessionals’.  There’s an XKCD about this.

Peter Ellis has another visualisation of the last election that emphasises comparisons. Here’s a comparison of Green and Labour votes (by polling place) across Auckland.

[Map: Green and Labour votes by polling place, Auckland]

There’s a clear division between the areas where Labour and Green polled about the same, and those where Labour did much better.

 

January 15, 2016

When you don’t find any

The Icelandic Ethical Humanist Association commissioned a survey on religion. For people who don’t want to read the survey report (in PDF, in Icelandic), there’s a story at Iceland Magazine. The main point is in the headline: 0.0% of Icelanders 25 years or younger believe God created the world, new poll reveals.

That’s a pretty strong claim, so what did the survey actually do? Well, here you do need to read the survey report (or at least feed snippets of it to Google Translate). Of the people they sampled, 109 were in the lowest age category, which is ‘younger than 25’. None of the 109 reported believing “God created the world” rather than “The world was created in the Big Bang”.

Now, that’s not a completely clean pair of alternatives, since a fair number of people — the Pope, for example — say they believe both, but it’s still informative to some extent. So what can we say about sampling uncertainty?

A handy trick for situations like this one is the ‘rule of 3’. If you ask N people and none of them is a creationist, a 95% confidence upper bound for the population proportion is 3/N; here, 3/109 ≈ 2.8%. So, “fewer than 3% of Icelanders under 25 believe God created the world”.
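For this survey, here’s a quick sketch comparing the rule of 3 with the exact binomial bound; the rule comes from solving (1 - p)^N = 0.05 for p:

```python
n = 109                      # people under 25 in the sample, none a creationist
rule_of_three = 3 / n        # approximate 95% upper bound
exact = 1 - 0.05 ** (1 / n)  # exact bound: solve (1 - p)**n = 0.05 for p

print(f"rule of 3: {rule_of_three:.1%}")  # about 2.8%
print(f"exact:     {exact:.1%}")          # about 2.7%
```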

December 14, 2015

A sense of scale

It was front page news in the Dominion Post today that about 0.1% of registered teachers had been investigated for “possible misconduct or incompetence in which their psychological state may have been a factor.” Over a six-year period. And 5% of them (that is, 0.005% of all teachers) were struck off or suspended as a result.
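The scale is worth spelling out; a quick arithmetic sketch using only the percentages above:

```python
investigated = 0.001            # about 0.1% of registered teachers, over six years
struck_off = 0.05 * investigated   # 5% of those investigated

print(f"{struck_off:.3%} of all registered teachers")   # 0.005%
```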

Actually, the front page news was even worse than that:
[Image: Dominion Post front page]

 

but since the “mentally-ill” bit wasn’t even true, the online version has been edited.

Given the high prevalence of some of these psychological and neurological conditions and the lack of a comparison group, it’s not even clear that they increase the risk of being investigated or struck off. After all, an early StatsChat story was about a Dom Post claim that “hundreds of unfit teachers” were working in our schools, based on 664 complaints over two years.

It would be interesting to compare figures for, say, rugby players or journalists. Except that would be missing the other point.  As Jess McAllen writes at The Spinoff, the phrasing and placement of the story, especially the original one, is a clear message to anyone with depression, or anxiety, or ADHD. Anyone who wants to think about the children might think about what that message does for rather more than 0.1% of them.

(via @publicaddress)

November 15, 2015

Out of how many?

Stuff has a story under the headline ACC statistics show New Zealand’s riskiest industries. They don’t. They show the industries with the largest numbers of claims.

To see why that’s a problem, consider instead the number of claims by broad ethnicity grouping: 135,000 for European, 23,100 for Māori, 10,800 for Pacific peoples (via StatsNZ). There’s no way that European ethnicity gives you a hugely greater risk of occupational injury than Māori or Pacific workers have. The difference between these groups is basically just population size. The true risks go in the opposite direction: 89 claims per 1000 full-time equivalent workers of European ethnicities, 97 for Māori, and 106 for Pacific.
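Turning counts into rates is one division per group; here’s a small sketch, with FTE denominators roughly backed out from the quoted rates (so they’re approximations for illustration, not official StatsNZ figures):

```python
# Claims from the post; FTE counts are rough values implied by the quoted
# rates (claims / rate * 1000), not official figures.
groups = {
    # ethnicity: (claims, full-time-equivalent workers)
    "European": (135_000, 1_520_000),
    "Māori": (23_100, 238_000),
    "Pacific": (10_800, 102_000),
}

for name, (claims, fte) in groups.items():
    print(f"{name}: {1000 * claims / fte:.0f} claims per 1000 FTE workers")
```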

With just the total claims we can’t tell whether working in supermarkets and grocery stores is really much more dangerous than logging, as the story suggests. I’m dubious, but.

August 5, 2015

What does 90% accuracy mean?

There was a lot of coverage yesterday about a potential new test for pancreatic cancer. 3News covered it, as did One News (but I don’t have a link). There’s a detailed report in the Guardian, which starts out:

A simple urine test that could help detect early-stage pancreatic cancer, potentially saving hundreds of lives, has been developed by scientists.

Researchers say they have identified three proteins which give an early warning of the disease, with more than 90% accuracy.

This is progress; pancreatic cancer is one of the diseases where there genuinely is a good prospect that early detection could improve treatment. The 90% accuracy, though, doesn’t mean what you probably think it means.

Here’s a graph showing how the error rate of the test changes with the numerical threshold used for diagnosis (figure 4, panel B, from the research paper):

[Figure 4B from the research paper: test error rates as the diagnostic threshold varies]

As you move from left to right the threshold decreases; the test is more sensitive (picks up more of the true cases), but less specific (diagnoses more people who really don’t have cancer). The area under this curve is a simple summary of test accuracy, and that’s where the 90% number came from.  At what the researchers decided was the optimal threshold, the test correctly reported 82% of early-stage pancreatic cancers, but falsely reported a positive result in 11% of healthy subjects.  These figures are from the set of people whose data was used in putting the test together; in a new set of people (“validation dataset”) the error rate was very slightly worse.

The research was done with an approximately equal number of healthy people and people with early-stage pancreatic cancer. They did it that way because that gives the most information about the test for a given number of people. It’s reasonable to hope that the area under the curve, and the sensitivity and specificity of the test, will be the same in the general population. Even so, the accuracy (in the non-technical meaning of the word) won’t be.

When you give this test to people in the general population, nearly all of them will not have pancreatic cancer. I don’t have NZ data, but in the UK the current annual rate of new cases goes from 4 out of 100,000 people at age 40 to 100 out of 100,000 at age 85+. The average over all ages is 13 cases per 100,000 people per year.

If 100,000 people are given the test and 13 have early-stage pancreatic cancer, about 10 or 11 of the 13 cases will have positive tests, but so will 11,000 healthy people.  Of those who test positive, 99.9% will not have pancreatic cancer.  This might still be useful, but it’s not what most people would think of as 90% accuracy.
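Here’s that calculation as a short sketch; the sensitivity and false-positive rate are the figures quoted above from the paper, and the incidence is the UK all-ages rate:

```python
population = 100_000
cases = 13                   # UK all-ages annual incidence per 100,000
sensitivity = 0.82           # fraction of true cases the test picks up
false_positive_rate = 0.11   # fraction of healthy people flagged anyway

true_positives = cases * sensitivity
false_positives = (population - cases) * false_positive_rate
ppv = true_positives / (true_positives + false_positives)

print(f"true positives:  about {true_positives:.0f}")   # about 11
print(f"false positives: about {false_positives:.0f}")  # about 11,000
print(f"share of positives without cancer: {1 - ppv:.1%}")  # 99.9%
```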

 

August 2, 2015

Pie chart of the week

A year-old pie chart describing Google+ users. On the right are two slices that would make up a valid but pointless pie chart: their denominator is Google+ users. On the left, two slices that have completely different denominators: all marketers and all Fortune Global 100 companies.

On top of that, it’s unlikely that the yellow slice is correct, since it’s not clear what the relevant denominator even is. And, of course, though most of the marketers probably identify as male or female, it’s not clear how the Fortune Global 100 Companies would report their gender.

[Image: the Google+ pie chart]

From @NoahSlater, via @LewSOS, originally from kwikturnmedia about 18 months ago.

July 31, 2015

Doesn’t add up

Daniel Croft nominated a story on savings from switching power companies for Stat of the Week.  The story says

The latest Electricity Authority figures show 2.1 million consumers have switched providers since 2010, saving $164 on average for the year. In 2014, 385,596 households switched over, collectively saving $281 million.

and he argues that this level of saving without any real harm to the industry shows there was serious overcharging.  It turns out that there’s another reason the story is relevant to StatsChat. The savings number is wrong, and this is clear based on other numbers in the story.

A basic rule of numbers in journalism is that if you have two numbers, you can usually do arithmetic on them for some basic fact-checking.  Dividing $281 million by 385,596 gives an average saving of over $700 per switching household. I find that a bit hard to believe — it’s a lot bigger than the ads for whatsmynumber.org.nz suggest.

Looking at the end of the story, we can see average savings for people who switched in each region of New Zealand.  The highest is $318 for Bay of Plenty. It’s not possible for the national average to be more than twice the highest regional average. The numbers are wrong somewhere.
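Here’s the fact-check as a couple of lines of arithmetic, using only numbers from the story:

```python
claimed_total_saving = 281_000_000   # dollars, from the story
households_switched = 385_596        # households that switched in 2014

implied_average = claimed_total_saving / households_switched
print(f"implied average saving: ${implied_average:.0f}")   # about $729

# The story's own figures put the national average at $164 and the highest
# regional average (Bay of Plenty) at $318, so something is inconsistent.
print(implied_average > 2 * 318)   # True
```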

We can compare with the Electricity Authority report, which is supposed to be the source of the numbers.  The number 281 appears once in the document (ctrl-F is your friend):

If all households had switched to the cheapest deal in 2014 they collectively stood to save $281 million.

So, the $281 million total isn’t the estimated total saving for the 385,596 households who actually switched; it’s the estimated total saving if everyone switched to the cheapest available option — in fact, if they switched every month to the cheapest available option that month — and if they didn’t use more electricity once it was cheaper, and if prices didn’t increase to compensate.

All the quoted savings numbers are like this, averages over all households if they switched to the cheapest option, everything else being equal, rather than data on the actual switches of actual households.

 

July 20, 2015

Pie chart of the day

From the Herald (squashed-trees version, via @economissive)

[Image: the Herald’s pie chart]

For comparison, a pie of those aged 65+ in NZ regardless of where they live, based on national population estimates:

[Image: pie chart of the NZ population aged 65+, by age group]

Almost all the information in the pie is about population size; almost none is about where people live.

A pie chart isn’t a wonderful way to display any data, but it’s especially bad as a way to show relationships between variables. In this case, if you divide by the size of the population group, you find that the proportion in private dwellings is almost identical for 65-74 and 75-84, but about 20% lower for 85+.  That’s the real story in the data.
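The divide-by-the-group-size step looks like this; the counts here are purely illustrative placeholders chosen to show the pattern, not the Herald’s or Stats NZ’s figures:

```python
# Illustrative numbers only: how many people in each age group are not in
# private dwellings, and the total population of the group.
age_groups = {
    # age group: (not in private dwellings, total population in group)
    "65-74": (10_000, 400_000),
    "75-84": (9_000, 220_000),
    "85+":   (20_000, 80_000),
}

for group, (not_private, total) in age_groups.items():
    print(f"{group}: {1 - not_private / total:.0%} in private dwellings")
```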