Posts filed under Random variation (105)

December 7, 2014

Bot or Not?

Turing had the Imitation Game, Philip K. Dick had the Voight-Kampff Test, and spammers gave us the CAPTCHA.  The Truthy project at Indiana University has BotOrNot, which is supposed to distinguish real people on Twitter from automated accounts, ‘bots’, using analysis of their language, their social networks, and their retweeting behaviour. BotOrNot seems to sort of work, but not as well as you might expect.

@NZquake, a very obvious bot that tweets earthquake information from GeoNet, is rated at an 18% chance of being a bot.  Siouxsie Wiles, for whom there is pretty strong evidence of existence as a real person, has a 29% chance of being a bot.  I’ve got a 37% chance, the same as @fly_papers, which is a bot that tweets the titles of research papers about fruit flies, and slightly higher than @statschat, the bot that tweets StatsChat post links, or @redscarebot, which replies to tweets that include ‘communist’ or ‘socialist’. Other people at a similar probability include Winston Peters, Metiria Turei, and Nicola Gaston (President of the NZ Association of Scientists).

PicPedant, the twitter account of the tireless Paulo Ordoveza, who debunks fake photos and provides origins for uncredited ones, rates at 44% bot probability, but obviously isn’t.  Ben Atkinson, a Canadian economist and StatsChat reader, has a 51% probability, and our only Prime Minister (or his twitterwallah), @johnkeypm, has a 60% probability.


November 28, 2014

Speed, crashes, and tolerances

The police think the speed tolerance change last year worked

Last year’s Safer Summer campaign introduced a speed tolerance of 4km/h above the speed limit for all of December and January, rather than just over the Christmas and New Year period. Police reported a 36 per cent decrease in drivers exceeding the speed limit by 1-10km/h and a 45 per cent decrease for speeding in excess of 10km/h.

Fatal crashes decreased by 22 per cent over the summer campaign. Serious injury crashes decreased by 8 per cent.

According to data from the NZTA Crash Analysis System, ‘driving too fast for the conditions’ was one of the contributing factors in about 20% of serious injury crashes and 30% of fatal crashes over the past seven years. The reductions in crashes seem more than you’d expect from those reductions in speeding.
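To make that concrete, here is the back-of-envelope arithmetic (my own rough sketch, with the generous assumption that crashes involving speed as a factor fall in exact proportion to the reported drop in speeding):

```python
# Back-of-envelope sketch: even if speeding fell 45% across the board,
# and every speed-factor crash fell in exact proportion, the expected
# reduction in crashes is the speed-factor share times that drop.
speed_factor_share = {"serious injury": 0.20, "fatal": 0.30}
speeding_reduction = 0.45  # the larger of the two reported decreases
observed_reduction = {"serious injury": 0.08, "fatal": 0.22}

for kind, share in speed_factor_share.items():
    expected = share * speeding_reduction
    print(f"{kind}: at most ~{expected:.1%} expected, "
          f"{observed_reduction[kind]:.1%} observed")
```

Even on that generous reading, fatal crashes would drop by only about 13.5%, well short of the reported 22%, which is why the crash reductions look bigger than the speeding reductions can explain.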

So, I decided to look at the reduction in crashes where speed was a contributing factor, according to the Crash Analysis System data.

Here’s the trend for December and January, with the four lines showing all crashes where speed was a factor, those with any injury, those with a severe or fatal injury, and those with a fatality. The reduced-tolerance campaign was active for the last time period, December 2013 and January 2014. It looks as though the trend over years is pretty consistent.



For comparison, here’s the trend in November and February, when there wasn’t a campaign running, again showing crashes where speed was listed in the database as a contributing cause, and with the four lines giving all, injury, severe or fatal, and fatal.


There really isn’t much sign that the trend was different last summer from recent years, or that the decrease was bigger in the months that had the campaign.  The trend of fewer crashes and fewer deaths has been going on for some time. Decreases in speeding are part of it, and the police have surely played an important role. That’s the context for assessing any new campaign: unless you have some reason to think last year was especially bad and the decrease would have stopped without the zero-tolerance policy, there isn’t much sign of an impact in the data.

The zero tolerance could be a permanent part of road policing, Mr Bush said.

“We’ll assess that at the end of the campaign, but I can’t see us changing our approach on that.”

No, I can’t either.

November 20, 2014

Round numbers

Nature doesn’t care about round numbers in base 10, but people do.  From @rcweir, via Amy Hogan, this is Twitter data of the number of people followed and following (truncated at 1000 to be readable). The number of people you follow is under your control, and there are clear peaks at multiples of 100 (and perhaps at multiples of 10 below 100). The number following you isn’t under your control, and there aren’t any similar patterns.



For a medical example, here are self-reported weights from the US National Health Interview Survey


The same thing happens with measured variables that are subject to operator error: blood pressure, for example, shows fairly strong digit preference unless a lot of care is taken in the measurement.
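You can reproduce the shape of this digit-preference effect with a few lines of simulation (purely hypothetical numbers, not the NHIS or Twitter data): give everyone a true value, let some fraction report it rounded to a round number, and spikes appear at the round values.

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical illustration: 'true' weights are smooth, but 60% of
# people report a value rounded to the nearest 10; the rest report
# to the nearest unit.
true_weights = [random.gauss(170, 25) for _ in range(100_000)]
reported = [round(w / 10) * 10 if random.random() < 0.6 else round(w)
            for w in true_weights]

counts = Counter(reported)
# Round reported values are far more common than their neighbours:
print(counts[170], counts[171], counts[172])
```

The spike at 170 dwarfs the counts at 171 and 172, even though the underlying distribution is perfectly smooth there.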

November 16, 2014

John Oliver on the lottery

When statisticians get quoted on the lottery it’s pretty boring, even if we can stop ourselves mentioning the Optional Stopping Theorem.

This week, though, John Oliver took on the US state lotteries: “…more than Americans spent on movie tickets, music, porn, the NFL, Major League Baseball, and video games combined.”

(you might also look at David Fisher’s Herald stories on the lottery)

October 22, 2014

Screening the elderly

I’ve seen two proposals recently for population screening of older people. Both are probably bad ideas, but for different reasons.

We had a Stat of the Week nomination for a proposal to screen people over 65 for depression at ordinary GP visits, to prevent suicide. The proposal was based on the fact that 70% of the suicides were in people who had visited a GP within the past month.  If the average person over 65 visits a GP less than about 8.5 times a year, fewer than 70% would have seen a GP in any given month, so those visiting their GP are at higher risk.  However, the risk is still very small: 225 suicides over 5.5 years is about 41 per year, and 70% of that is about 29 per year.
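Spelling out that back-of-envelope arithmetic:

```python
# The numbers from the proposal: 225 suicides over 5.5 years,
# 70% of them in people who had seen a GP in the past month.
suicides = 225
years = 5.5
per_year = suicides / years        # about 41 per year
saw_gp_recently = 0.70 * per_year  # about 29 per year

# The '8.5 visits a year' benchmark: with visits spread evenly over
# the year, someone averaging 8.5 visits/year has roughly 8.5/12,
# i.e. about a 70% chance of a visit in any given month. So 70% of
# suicides having seen a GP recently is evidence of extra risk only
# if typical visit rates are lower than that.
print(round(per_year), round(saw_gp_recently), round(8.5 / 12, 2))
```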

To identify those 29, it would be necessary to administer the screening questionnaire to a lot of people, at least hundreds of thousands. That in itself is costly; more importantly, since the questionnaire will not be perfectly accurate, there will be tens of thousands of positive results. For example, a US randomised trial of depression screening in people over 60 recruited 600 participants from 9000 people screened. In the ‘usual care’ half of the trial there were 3 completed suicides over the next two years; in the half receiving more intensive and focused help with depression there were 2. The trial suggests that screening and intensive intervention do help with symptoms of major depression (probably at substantial cost), but it’s not likely to be a feasible intervention to prevent suicide.
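One way to see how little those 3-vs-2 counts can tell us (my own sketch, assuming equal-sized arms, not an analysis from the trial report): conditional on the total number of events, an intervention with no effect gives a 50:50 split, and 3 vs 2 is as close to 50:50 as five events can get.

```python
from math import comb

# Conditional on 5 suicides in total, if the intervention had no
# effect, the number falling in the usual-care arm is Binomial(5, 1/2).
# The probability of seeing 3 or more in that arm by chance alone:
p_at_least_3 = sum(comb(5, k) for k in (3, 4, 5)) / 2**5
print(p_at_least_3)  # 0.5: a 3-2 split is exactly what chance would give
```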


The other proposal is from the UK, where GPs will be financially rewarded for dementia diagnoses. In contrast to depression, dementia is pretty much untreatable. There’s nothing that modifies the course of the disease, and even the symptomatic treatments are of very marginal benefit.

The rationale for the proposal is that early diagnosis gives patients and their families more time to think about options and strategies. That could be of some benefit, at least in the subset of people with dementia who are able and willing to talk about it, but similar advance planning could be done — and perhaps better — without waiting for a diagnosis.

Diagnosis isn’t like treatment. As a British GP and blogger, Martin Brunet, points out

We are used to being paid for things of course, like asthma reviews and statin prescribing, and we are well aware of the problems this causes – but at least patients can opt out if they don’t like it.

They can refuse to attend a review, decline our offer of a statin or politely take the pill packet and store it unopened in the kitchen cupboard. They cannot opt out of a diagnosis.


Infographic of the week

From the twitter of the Financial Times, “Interactive: who is the better goalscorer, Messi or Ronaldo?”

I assume on the FT site this actually is interactive, but since they have the world’s most effective paywall, I can’t really tell.

The distortion makes the bar graph harder to read, but it doesn’t matter much since the data are all there as numbers: the graph doesn’t play any important role in conveying the information. What’s strange is that the bent graph doesn’t really resemble any feature of a football pitch, which I would have thought would be the point of distorting it.



The question of who has the highest-scoring season is fairly easy to read off, but the question of “who is the better goalscorer” is a bit more difficult. Based on the data here, you’d have to say it was too close to call, but presumably there’s other information that goes into putting Messi at the top of the ‘transfer value’ list at the site where the FT got the data.

(via @economissive)

September 26, 2014

Screening is harder than that

From the Herald

Calcium in the blood could provide an early warning of certain cancers, especially in men, research has shown.

Even slightly raised blood levels of calcium in men was associated with an increased risk of cancer diagnosis within one year.

The discovery, reported in the British Journal of Cancer, raises the prospect of a simple blood test to aid the early detection of cancer in high risk patients.

In fact, from the abstract of the research paper, 3% of people had high blood levels of calcium, and among those, 11.5% of the men developed cancer within a year. That’s really not strong enough prediction to be useful for early detection of cancer. For every thousand men tested you would find three cancer cases, and 27 false positives. What the research paper actually says under “Implications for clinical practice” is

“This study should help GPs investigate hypercalcaemia appropriately.”

That is, if a GP happens to measure blood calcium for some reason and notices that it’s abnormally high, cancer is one explanation worth checking out.
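For the record, here is the arithmetic behind the “three cancer cases and 27 false positives” figure:

```python
# Out of 1000 men tested: 3% have raised calcium, and 11.5% of
# those are diagnosed with cancer within a year.
n = 1000
positives = 0.03 * n                      # 30 men flagged
true_cases = 0.115 * positives            # ~3.45 cancers found
false_positives = positives - true_cases  # ~26.5 flagged men without cancer
print(round(true_cases), round(false_positives))
```

That is a positive predictive value of 11.5%: nearly nine out of ten men with a positive test would not have cancer.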

The overstatement is from a Bristol University press release, with the lead

High levels of calcium in blood, a condition known as hypercalcaemia, can be used by GPs as an early indication of certain types of cancer, according to a study by researchers from the universities of Bristol and Exeter.

and later on an explanation of why they are pushing this angle

The research is part of the Discovery Programme which aims to transform the diagnosis of cancer and prevent hundreds of unnecessary deaths each year. In partnership with NHS trusts and six Universities, a group of the UK’s leading researchers into primary care cancer diagnostics are working together in a five year programme.

While the story isn’t the Herald’s fault, using a photo of a man drinking a glass of milk is. The story isn’t about dietary calcium being bad, it’s about changes in the internal regulation of calcium levels in the blood, a completely different issue. Milk has nothing to do with it.

September 19, 2014

Not how polling works

The Herald interactive for election results looks really impressive. The headline infographic for the latest poll, not so much. The graph is designed to display changes between two polls, for which the margin of error is √2, about 1.4 times, larger than in a single poll: the margin of error for National goes beyond the edge of the graph.



The lead for the story is worse

The Kim Dotcom-inspired event in Auckland’s Town Hall that was supposed to end John Key’s career gave the National Party an immediate bounce in support this week, according to polling for the last Herald DigiPoll survey.

Since both the Dotcom and Greenwald/Snowden Moments of Truth happened in the middle of polling, they’ve split the results into before/after Tuesday.  That is, rather than showing an average of polls, or even a single poll, or even a change from a single poll, they are headlining the change between the first and second halves of a single poll!

The observed “bounce” was 1.3%. The quoted margin of error at the bottom of the story is 3.5%, from a poll of 775 people. The actual margin of error for a change between the first and second halves of the poll is about 7%.
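Those margins of error are straightforward to reconstruct, using the usual worst-case formula for a proportion near 50%:

```python
from math import sqrt

# Maximum 95% margin of error for a proportion: 1.96 * sqrt(0.25/n).
n = 775
moe_poll = 1.96 * sqrt(0.25 / n)       # single poll: ~3.5%

# A change between two estimates has sqrt(2) times the margin of error;
# for a before/after split of one poll, each half also has only n/2 people.
moe_change = sqrt(2) * moe_poll                    # two full polls: ~5%
moe_split = sqrt(2) * 1.96 * sqrt(0.25 / (n / 2)) # two half-polls: ~7%

print(f"{moe_poll:.1%} {moe_change:.1%} {moe_split:.1%}")
```

A 1.3% “bounce” against a 7% margin of error is, to put it gently, not a finding.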

Only in the Internet Party’s wildest dreams could this split-half comparison have told us anything reliable. It would need the statistical equivalent of the CSI magic video-zoom enhance button to work.


August 16, 2014

Lotto and concrete implementation

There are lots of Lotto strategies based on trying to find patterns in numbers.

Lotto New Zealand televises its draws, and you can find some of them on YouTube.

If you have a strategy for numerological patterns in the Lotto draws, it might be a good idea to watch a few Lotto draws and ask yourself how the machine knows to follow your pattern.

If you’re just doing it for entertainment, go in good health.

July 30, 2014

If you can explain anything, it proves nothing

An excellent piece from sports site Grantland (via Brendan Nyhan), on finding explanations for random noise and regression to the mean.

As a demonstration, they took ten baseball batters and ten pitchers who had apparently improved over the season so far, and searched the internet for news that would allow them to find an explanation.  They got pretty good explanations for all twenty.  Looking at past seasons, this sort of short-term improvement almost always turns out to be random noise, despite the convincing stories.
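A small simulation (mine, not Grantland’s) shows the same phenomenon: if performance is true skill plus noise, the biggest apparent improvers in one period reliably fall back in the next.

```python
import random

random.seed(42)

# Each 'player' has a fixed true skill; observed performance in each
# of three periods is skill plus independent noise. 'Improvement' from
# period 1 to 2 is therefore mostly luck.
players = []
for _ in range(1000):
    skill = random.gauss(0, 1)
    p1, p2, p3 = (skill + random.gauss(0, 1) for _ in range(3))
    players.append((p2 - p1, p3 - p2))

players.sort(reverse=True)  # biggest apparent improvers first
top = players[:10]
followup = sum(later for _, later in top) / len(top)
print(f"average next-period change for the top 10 improvers: {followup:.2f}")
```

The average follow-up change for the top improvers comes out negative: regression to the mean, with no explanation required.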

Having a good explanation for a trend feels like convincing evidence the trend is real. It feels that way to statisticians as well, but it isn’t true.

It’s traditional at this point to come up with evolutionary psychology explanations for why people are so good at over-interpreting trends, but I hope the circularity of that approach is obvious.