- From Felix Salmon: US population is increasing, and people are moving to the cities, so why is (sufficiently fine-scale) population density going down? Because rich people take up more space and fight for stricter zoning. You’ve heard of NIMBYs, but perhaps not of BANANAs
- From the New York Times: one of the big credit-rating companies is no longer using debts referred for collection as an indicator, as long as they end up being paid. This isn’t a new spark of moral feeling, it’s just for better prediction.
- And from Felix Salmon again: Firstly, Americans are bad at statistics. When it comes to breast cancer, they massively overestimate the probability that early diagnosis and treatment will lead to a cure, while they also massively underestimate the probability that an undetected cancer will turn out to be harmless.
The Herald has an interesting set of displays of the latest DigiPoll political opinion survey. According to the internets it was even worse earlier in the day, but we can pass over that and only point out that corrections in news stories shouldn’t happen silently (except perhaps for typos).
We can start with the standard complaint: the margin of error for the poll itself is 3.6%, so the margin of error for a change since the last poll is about 1.4 times as large, or a little over 5%. None of the changes is larger than 5%, and only one comes close.
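For anyone who wants to check, a quick sketch in Python (assuming the usual convention of a 95% “maximum margin of error” computed at 50% support):

```python
from math import sqrt

n = 750                                  # reported sample size
moe = 1.96 * sqrt(0.5 * 0.5 / n)         # maximum margin of error, at 50% support
print(round(100 * moe, 1))               # ≈ 3.6 (percent)

# for the change between two independent polls the variances add,
# so the margin of error scales by sqrt(2), about 1.4
moe_change = sqrt(2) * moe
print(round(100 * moe_change, 1))        # ≈ 5.1
```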
Secondly, there is a big table for the minor parties. I would normally not quote the whole table, but in this case it’s already changed once today.
The total reported for the minor parties is 6.1%, and since there were 750 people sampled, about 46 of them indicated support for one of these parties. That’s not really enough to split up over 7 parties. These 46 then get split up further, by age and gender. At this point, some of the sample proportions are zero, displayed as “-” for some reason.
[Updated to add: and why does the one male 40-64 yr old Aucklander who supported ACT not show up in the New Zealand total?]
Approximately 1 in 7 New Zealanders is 65+, so there should be about 6 or 7 minor-party supporters in that age group in the sample. That’s really not enough to estimate a split over 7 parties. Actually, the poll appears to have been lucky in recruiting older folks: it looks like 6 NZ First, 2 Conservative, 1 Mana.
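The counting behind this takes only a couple of lines (using the 6.1% minor-party total and the 1-in-7 figure above):

```python
minor_supporters = 0.061 * 750      # 6.1% of the 750 sampled
over_65 = minor_supporters / 7      # if roughly 1 in 7 NZers is 65+
print(round(minor_supporters))      # ≈ 46 people in total
print(round(over_65, 1))            # ≈ 6.5, i.e. 6 or 7 people aged 65+
```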
That’s all pretty standard overtabulating, but the interesting and creative problems happen at the bottom of the page. There’s an interactive graph, done with the Tableau data exploration software. From what I’ve heard, Tableau is really popular in business statistics: it gives a nice clear interface to selecting groups of cells for comparison, dropping dimensions, and other worthwhile data exploration activities, and helps analysts present this sort of thing to non-technical managers.
However, the setup that the Herald have used appears to be intended for counts or totals, not for proportions. For example, if you click on April 2012, and select View Data, you get
which is unlikely to improve anyone’s understanding of the poll.
I like interactive graphics. I’ve put a lot of time and effort into making interactive graphics. I’ve linked to a lot of good interactive graphics on this blog. The Herald has the opportunity to show the usefulness of interactive graphics to a much wider community than I’ll ever manage. But not this way.
Last night, 3News had a scare story about positive drug tests at work. The web headline is “Report: More NZers working on drugs”, but that’s not what they had information on:
New figures reveal more New Zealanders were caught with drugs in their system at work last year.
…new figures from the New Zealand Drug Detection Agency reveal 4300 people tested positive for drugs at work last year.
The New Zealand Drug Detection Agency says employers are doing a better job of self-regulating. The agency performed almost 70,000 tests last year, 30 percent more than in 2011.
If 30% more were tested, you’d expect more to be positive. The story doesn’t say how many tested positive the previous year, but with the help of the Google, I found last year’s press release, which says
8% of men tested “non-negative” compared with 6% of women tested in 2011.
Now, 8% of 70000 is 5600, and even 6% of 70000 is 4200. Given that the majority of the tests are in men, it looks like the proportion testing positive went down this year.
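Spelling out the arithmetic (figures from the story and last year’s press release; the exact 2012 split by sex isn’t given, so this just brackets what we’d expect if the positive rate hadn’t changed):

```python
tests_2012 = 70000                    # tests performed last year
positives_2012 = 4300                 # reported positive tests

# applying the 2011 rates to this year's volume brackets the expected count
expected_low = 0.06 * tests_2012      # women's 2011 rate: 4200
expected_high = 0.08 * tests_2012     # men's 2011 rate: 5600

rate_2012 = positives_2012 / tests_2012
print(round(100 * rate_2012, 1))      # ≈ 6.1 percent, at the bottom of the range
```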
The worst part of the story statistically is when they report changes in proportions of which drug was found as if this was meaningful. For example,
When it comes to industries, oil and gas had an 18 percent drop in positive tests for methamphetamine, but showed a marked increase in the use of opiates.
That’s an increase in the use of opiates as a proportion of those testing positive. Since proportions have to add up to 100%, a decrease in the proportion of positive tests that are for methamphetamine has to come with an increase in some other set of drugs — just as a matter of arithmetic.
Stuff‘s story from January is just as bad, with the lead
Employers are becoming more aware of the dangers of drugs and alcohol in the workplace as well as the benefits of testing for them.
and quoting an employer as saying
“And, we have no fear of an employee turning up to work and operating in an unsafe way, putting themselves and others at risk.”
as if occasional drug tests were the answer to all occupational health and safety problems.
The other interesting thing about the Stuff story is that it’s about a different organisation: Drug Testing Services, not NZ DDA — there’s more than one of them out there! You might easily have thought from the 3News story that the figures they quoted referred to all workplace drug tests in NZ, rather than just those sold by one company.
Given the claims being made, the evidence for either financial or safety benefits is amazingly weak. No-one in these stories even claims that introducing testing has actually reduced on-the-job accidents in their company, for example, let alone presents any data.
If you look on PubMed, the database of published medical research, there are lots of papers on new testing methods and reproducibility of test results, and a few that show people who have accidents are more likely than others to test positive. There’s very little even of before-after comparisons: a Cochrane review on this topic found three before-after comparisons. Two of the three found a small decrease in accident rates immediately after introducing testing; the third did not. A different two of the three found that the long-term decreasing trend in injuries got faster after introducing testing; again, the third did not. The review concluded that there was insufficient evidence to recommend for or against testing.
There’s better evidence for mandatory alcohol testing of truck drivers, but since those tests measure current blood alcohol concentrations, not past use, it doesn’t tell us much about other types of drug testing.
There are new reports, according to the Herald, that synthetic cannabinoids are ‘associated’ with suicidal tendencies in long-term users. One difficulty in evaluating this sort of data is the huge peak in suicide rates in young men. Almost anything you can think of that might be a bad idea is more commonly done by young men than by other people, so an apparent association isn’t all that surprising. There is also the problem of direction of causation — the sorts of problems that make suicide a risk might also increase drug use — and the difficulty of getting even a reasonable estimate of the denominator, the number of people using the drug. Serious, rare effects of a recreational drug are the hardest to be sure about, and the same is true of prescription medications. It took big randomized trials to find out that Vioxx more than doubled your rate of heart attack, and a study of 1500 lung-cancer cases to find even the 20-fold increase in risk from smoking.
In this particular example there is additional supporting evidence. A few years back there was a lot of research into anti-cannabinoid drugs for weight loss (anti-munchies), and one of the things that sank these was an increase in suicidal thoughts in the patients in the early randomized trials. It’s quite plausible that the same effect would happen as a dose of the cannabinoid wears off.
In general, though, this is the sort of effect that the proposed testing scheme for psychoactive drugs will have difficulty finding, or ruling out.
The Herald has a story about a new app called TalkTo. Rather than you calling a business and waiting around for a possibly unhelpful response, you can text TalkTo and wait for them to call the business, ask your question and pass on the unhelpful response. Or, at least, you can if the business is in the USA or Canada — they currently wouldn’t handle Novapay or Qantas, the two examples in the story. The app obviously wouldn’t help for issues that require a dialogue, which includes essentially all the time I spend on hold.
Anyway, the statistics angle is that we apparently spend 43 days on hold during our lives. As a basic numeracy challenge: is this more than you expect or less?
The number comes from 20 minutes per week for 60 years, so it doesn’t apply to any actually existing people — 60 years ago, we didn’t have the same level of on-hold, and 60 years in the future there’s at least some hope that a larger fraction of businesses will figure out how to make a useful web page (or whatever the next communication technology but seven turns out to be).
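The arithmetic behind the 43 days, for the record:

```python
minutes_on_hold = 20 * 52 * 60           # 20 min/week, 52 weeks/year, 60 years
days_on_hold = minutes_on_hold / (60 * 24)
print(round(days_on_hold, 1))            # ≈ 43.3 days
```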
There’s a report saying that NZ smartphone users are the 7th most at risk for attacks by cybercriminals. We could ask the usual questions about whether this survey is worth the paper it’s not written on, but this time those are left as an exercise for the reader [as is often the case, the last sentence of the Herald's story is especially informative].
An unusual problem with the ranking is
The ranking was based on the percentage of Android apps rated as high-risk over the total number of apps scanned per country.
The use of a percentage rather than a total here seems to make no sense. If you have a high-risk app on your phone, it doesn’t become low-risk just because you also have lots of other apps.
The ACC gives people money, and so has to balance the risks of fraud, rejection of valid claims, and red tape. We get a lot of stories claiming that ACC isn’t paying people it should pay, but today’s story is about the cost of fraud.
The cost is variously described as
- $10 million (over four years)
- $1.8 million-$3.5 million (per year)
- $35.9 million (actuarial cost)
- $131 million (total future cost if it isn’t stopped)
The $131 million and $10 million numbers don’t have much going for them. There’s no particular reason to give a total over four years; an average of $2.5 million/year would have been more helpful. The $131 million includes costs into the indefinite future, but makes them look as if they are just like costs incurred now. That’s the point of the actuarial estimate, to say something sensible about the current equivalent value of future expenses.
By an interesting coincidence, the $131 million figure exceeds the actuarial estimate by about the same ratio as the headline total exceeds the annual average.
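That is, to one decimal place:

```python
actuarial_ratio = 131 / 35.9     # total future cost over actuarial cost
headline_ratio = 10 / 2.5        # four-year total over annual average
print(round(actuarial_ratio, 1))     # ≈ 3.6
print(round(headline_ratio, 1))      # 4.0
```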
The stories about the 20% cut in prices for international bandwidth from Southern Cross would be a lot more informative if they included some idea of how much it costs before and after, or, even better, roughly what fraction of the consumer price goes to international bandwidth. Even to the nearest 20% would be nice.
Otherwise we’re not getting much value added over the press release.
The Herald has a story about police being arrested: 67 over 2 2/3 years. That’s about 25 per year. The rate this year so far is a bit lower than in the previous two years, but well within the margin of error.
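A rough check of the margin-of-error claim, treating the yearly arrest count as a Poisson count (a simplifying assumption of mine, not anything the Herald reports):

```python
from math import sqrt

arrests, years = 67, 8 / 3          # 67 arrests over 2 2/3 years
rate = arrests / years
print(round(rate, 1))               # ≈ 25.1 per year

# a single year's count at this rate would typically land within
# about 1.96 * sqrt(rate) of the mean
moe = 1.96 * sqrt(rate)
print(round(rate - moe), round(rate + moe))   # roughly 15 to 35
```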
The Police Association says this proves we don’t need an independent complaints process, since the police already do a good job in catching their own when they stray. Of course, the figures don’t show anything of the sort, unless there is some independent reason to believe that 25/year is the number that should be arrested. The Police Association also says that many of the arrests would have ‘not guilty’ results. This could be true, and it’s a pity that neither the Herald nor the Police Association provided any actual information on convictions.
The Herald also quotes a survey from October, purporting to show that confidence in police has fallen. I covered this at the time. It doesn’t.
The arrest counts are not evidence one way or the other on how well the police are policed. They are easier to obtain than relevant evidence would be, but that’s not much consolation.
Stuff (The Sunday Star-Times) has a story about taser use statistics that illustrates the importance of the question ‘compared to what?’.
The paper tells us that nearly 1/3 of taser discharges have been at people considered by police to have mental health issues. Is that a lot? What proportion would you expect? If they know, they should be telling us, and if they don’t know, the statistic is pretty meaningless. We aren’t even told how the proportion is changing over time. We do know that these uses were explicitly contemplated when NZ Police did their 2006 trial of the taser:
“The taser can be used by Police when dealing with: unarmed (or lightly armed) but highly aggressive people, individuals displaying irrational or bizarre behaviour, and people under the influence of mind altering substances, solvents or alcohol.”
Later on we get more useful (though not new) information from analysis of the pilot period of Taser use, where the weapons were more likely to be discharged when police attended a mental health emergency (27% chance) than when they made a criminal arrest (10% chance). This is at least answering a meaningful and relevant question, and shows a large difference, though it’s still not clear how big a difference you would expect. Mental health emergencies get police involvement because they are emergencies; many criminal arrests are much more boring and routine.
The story quotes Judi Clements of the Mental Health Foundation as saying “Once you start giving that sort of weapon to police it’s highly likely it’s going to be used”. That’s a reasonable concern, but the numbers in the story, to the extent that you can interpret them, don’t really support it. There have been 212 taser discharges over two years, from between 600 and 900 tasers, or less than one discharge per five years per taser. We aren’t told anything about what rate of use would be appropriate, but 0.2 uses per taser-year doesn’t seem all that ‘highly likely’.
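Working that number out, for both ends of the 600–900 range:

```python
discharges, years = 212, 2
rates = {tasers: discharges / (tasers * years) for tasers in (600, 900)}
for tasers, per_taser_year in rates.items():
    print(tasers, round(per_taser_year, 2))   # 600 → 0.18, 900 → 0.12
# under 0.2 discharges per taser-year either way
```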
Further statistics released to the Sunday Star-Times under the Official Information Act show there are serious reliability issues around the weapons.
That’s based on lots of them needing repairs, not on failing to work, since
Despite the problems, only one weapon had failed to fire and administer a shock since they had been rolled out.
that is, 99.5% reliability in use.
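That figure comes from one failure in the 212 discharges:

```python
failures, discharges = 1, 212
reliability = 1 - failures / discharges
print(round(100 * reliability, 1))    # 99.5 percent
```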
It’s important for society to keep tabs on the use of tasers by law enforcement, especially because of the potential for misuse, but reporting of free-floating numbers doesn’t qualify.