Posts filed under Risk (141)

December 11, 2014

Very like a whale

We see patterns everywhere, whether they are there or not. This gives us conspiracy theories, superstition, and homeopathy. It’s really hard to avoid drawing conclusions about patterns, even when you know they aren’t really there.

Some of the most dramatic examples are visual:

HAMLET
Do you see yonder cloud that’s almost in shape of a camel?

LORD POLONIUS
By the mass, and ’tis like a camel, indeed.

HAMLET
Methinks it is like a weasel.

LORD POLONIUS
It is backed like a weasel.

HAMLET
Or like a whale?

LORD POLONIUS
Very like a whale.

Hamlet was probably trolling, but he got away with it because seeing shapes in the clouds is a common experience.

Just as we’re primed to see causal relationships whether they are there or not, we are also primed to recognise shapes whether they are there or not. The compulsion is perhaps strongest for faces, as in this bitter melon (karela) from Reddit

[image: bitter melon that appears to have a face]

and this badass mop

[image: mop that appears to have a face]

It turns out that computers can be taught similar illusions, according to new research from the University of Wyoming.  The researchers took software that had been trained to recognise certain images. They then started off with random video snow or other junk patterns and made repeated random changes, evolving images that the computer would recognise.

[image: evolved noise and junk patterns that the network confidently recognises]

These are, in a sense, computer optical illusions. We can’t see them, but they are very convincing to a particular set of artificial neural networks.

There are two points to this. The first is that when you see a really obvious pattern it isn’t necessarily there. The second is that even if computers are trained to classify a particular set of examples accurately, they needn’t do very well on completely different sets of examples.

In this case the computer was looking for robins and pandas, but it might also have been trained to look for credit card fraud or terrorists.
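The evolutionary search the researchers describe can be sketched in a few lines. In this toy version, a fixed random linear scorer stands in for the trained network (purely an assumption for illustration; the study used real deep networks), and single-pixel mutations are kept whenever they raise the classifier's confidence:

```python
import math
import random

random.seed(42)

N = 64  # a toy "image": 64 pixel values in [0, 1]

# Stand-in for a trained network: a fixed random linear scorer squashed
# through a logistic. (Hypothetical; the study used real deep networks.)
weights = [random.uniform(-1, 1) for _ in range(N)]

def confidence(image):
    score = sum(w * p for w, p in zip(weights, image))
    return 1 / (1 + math.exp(-score))

def evolve(generations=2000):
    image = [random.random() for _ in range(N)]  # start from random "snow"
    best = confidence(image)
    for _ in range(generations):
        candidate = image[:]
        i = random.randrange(N)
        candidate[i] = min(1.0, max(0.0, candidate[i] + random.gauss(0, 0.3)))
        c = confidence(candidate)
        if c > best:  # keep any mutation the classifier likes better
            image, best = candidate, c
    return image, best

evolved_image, final_conf = evolve()
print(f"classifier confidence in the evolved junk image: {final_conf:.3f}")
```

The evolved image means nothing to a human eye, but the scorer rates it very highly: the same mismatch between machine classification and human perception that the researchers found, just on a toy scale.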

 

December 7, 2014

Bot or Not?

Turing had the Imitation Game, Philip K. Dick had the Voight-Kampff Test, and spammers gave us the CAPTCHA.  The Truthy project at Indiana University has BotOrNot, which is supposed to distinguish real people on Twitter from automated accounts, ‘bots’, using analysis of their language, their social networks, and their retweeting behaviour. BotOrNot seems to sort of work, but not as well as you might expect.

@NZquake, a very obvious bot that tweets earthquake information from GeoNet, is rated at an 18% chance of being a bot.  Siouxsie Wiles, for whom there is pretty strong evidence of existence as a real person, has a 29% chance of being a bot.  I’ve got a 37% chance, the same as @fly_papers, which is a bot that tweets the titles of research papers about fruit flies, and slightly higher than @statschat, the bot that tweets StatsChat post links,  or @redscarebot, which replies to tweets that include ‘communist’ or ‘socialist’. Other people at a similar probability include Winston Peters, Metiria Turei, and Nicola Gaston (President of the NZ Association of Scientists).

PicPedant, the Twitter account of the tireless Paulo Ordoveza, who debunks fake photos and provides origins for uncredited ones, rates at 44% bot probability, but obviously isn’t.  Ben Atkinson, a Canadian economist and StatsChat reader, has a 51% probability, and our only Prime Minister (or his twitterwallah), @johnkeypm, has a 60% probability.

 

November 28, 2014

Speed, crashes, and tolerances

The police think the speed tolerance change last year worked

Last year’s Safer Summer campaign introduced a speed tolerance of 4km/h above the speed limit for all of December and January, rather than just over the Christmas and New Year period. Police reported a 36 per cent decrease in drivers exceeding the speed limit by 1-10km/h and a 45 per cent decrease for speeding in excess of 10km/h.

Fatal crashes decreased by 22 per cent over the summer campaign. Serious injury crashes decreased by 8 per cent.

According to data from the NZTA Crash Analysis System, ‘driving too fast for the conditions’ was one of the contributing factors in about 20% of serious injury crashes and 30% of fatal crashes over the past seven years. The reductions in crashes seem more than you’d expect from those reductions in speeding.
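As a back-of-envelope check on that last sentence, here is the most generous version of what the reported speeding reductions could explain, assuming crashes with speed as a factor fall in exact proportion to speeding itself:

```python
# Generous upper bound on the effect of less speeding, using the larger
# 45% figure (the reported decrease in speeding by more than 10km/h).
speed_factor_share = 0.30   # share of fatal crashes with speed as a factor
speeding_reduction = 0.45   # reported decrease in >10km/h speeding

predicted_reduction = speed_factor_share * speeding_reduction
print(f"predicted fatal-crash reduction: {predicted_reduction:.1%}")
```

Even this generous bound falls well short of the reported 22 per cent decrease in fatal crashes, which is a reason to look at the longer-term trend before crediting the campaign.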

So, I decided to look at the reduction in crashes where speed was a contributing factor, according to the Crash Analysis System data.

Here’s the trend for December and January, with the four lines showing all crashes where speed was a factor, those with any injury, those with a severe or fatal injury, and those with a fatality. The reduced-tolerance campaign was active for the last time period, December 2013 and January 2014. It looks as though the trend over years is pretty consistent.

[figure: December and January crashes where speed was a factor, four lines: all, injury, severe or fatal, and fatal]

 

For comparison, here’s the trend in November and February, when there wasn’t a campaign running, again showing crashes where speed was listed in the database as a contributing cause, and with the four lines giving all, injury, severe or fatal, and fatal.

[figure: November and February crashes where speed was a factor, same four lines]

There really isn’t much sign that the trend was different last summer from recent years, or that the decrease was bigger in the months that had the campaign.  The trend of fewer crashes and fewer deaths has been going on for some time. Decreases in speeding are part of it, and the police have surely played an important role. That’s the context for assessing any new campaign: unless you have some reason to think last year was especially bad and the decrease would have stopped without the zero-tolerance policy, there isn’t much sign of an impact in the data.

The zero tolerance could be a permanent part of road policing, Mr Bush said.

“We’ll assess that at the end of the campaign, but I can’t see us changing our approach on that.”

No, I can’t either.

November 16, 2014

John Oliver on the lottery

When statisticians get quoted on the lottery it’s pretty boring, even if we can stop ourselves mentioning the Optional Stopping Theorem.

This week, though, John Oliver took on the US state lotteries: “…more than Americans spent on movie tickets, music, porn, the NFL, Major League Baseball, and video games combined.”

(you might also look at David Fisher’s Herald stories on the lottery)

November 7, 2014

What overdiagnosis looks like

An article in the New England Journal of Medicine talks about screening for thyroid cancer in South Korea. There has been a massive increase in diagnosis, mostly of very small tumours that are probably harmless — yet there has been no change in thyroid cancer deaths.

[figure: thyroid-cancer incidence in South Korea rising steeply while deaths stay flat]

As the authors say:

Thyroid-cancer surgery has substantial consequences for patients. Most must receive lifelong thyroid-replacement therapy, and a few have complications from the procedure. An analysis of insurance claims for more than 15,000 Koreans who underwent surgery showed that 11% had hypoparathyroidism and 2% had vocal-cord paralysis.

 

November 3, 2014

It’s warmer out there

Following a discussion on Twitter this morning, I thought I’d write again about increasing global temperatures, and also about the types of probability statements.

The Berkeley Earth Surface Temperature project is the most straightforward source for conclusions about warming in the recent past. The project was founded by Richard Muller, a physicist who was concerned about the treatment of the raw temperature measurements in some climate projections. At one point, there was a valid concern that the increasing average temperatures could be some sort of statistical artefact based on city growth (‘urban heat island’) or on the different spatial distribution and accuracy of recent and older monitors. This turned out not to be the case. Temperatures are increasing, systematically.  The Berkeley Earth estimate agrees very well with the NASA, NOAA, and Hadley/CRU estimates for recent decades

[figure: Berkeley Earth global temperature estimate with uncertainty band, alongside the NASA, NOAA, and Hadley/CRU estimates]

The grey band around the curve is also important. This is the random error. There basically isn’t any.  To be precise, for recent years, the difference between current and average temperatures is 20 to 40 times the uncertainty — compare this to the 5σ used in particle physics.
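To see what “20 to 40 times the uncertainty” means, here is the arithmetic with illustrative numbers (the anomaly and uncertainty below are assumptions for the sketch, not values read off the Berkeley Earth files):

```python
import math

anomaly = 0.9       # assumed recent warming anomaly, degrees C
uncertainty = 0.03  # assumed standard error of the estimate, degrees C

z = anomaly / uncertainty
print(f"the anomaly is about {z:.0f} standard errors from zero")

# One-sided tail probability of a normal deviate, for comparison with
# the 5-sigma discovery threshold used in particle physics.
def one_sided_p(sigma):
    return 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"p-value at 5 sigma:  {one_sided_p(5):.1e}")
print(f"p-value at {z:.0f} sigma: {one_sided_p(z):.1e}")
```

At 5 sigma the tail probability is already below one in a million; at 20 to 40 sigma it is so small that “basically no random error” is, if anything, an understatement.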

What there is uncertainty about is the future (where prediction is hard), and the causal processes involved. That’s not to say it’s a complete free-for-all. The broad global trends fit very well to a simple model based on CO2 concentration plus the cooling effects of major volcanic eruptions, but the detail is hard to predict.

Berkeley Earth has a page comparing reconstructions of  temperatures with actual data for many climate models.  The models in the last major IPCC assessment report show a fairly wide band of prediction uncertainty — implying that future temperatures are more uncertain than current temperatures. The lines still all go up, but by varying amounts.

[figure: climate-model temperature reconstructions and predictions compared with observed data]

 

The same page has a detailed comparison of the regional accuracy of the models used in the new IPCC report. The overall trend is clear, but none of the models is uniformly accurate. That’s where the uncertainty comes from in the IPCC statements.

The earth has warmed, and as the oceans catch up there will be sea level rises. That’s current data, without any forecasting. There’s basically no uncertainty there.

It’s extremely likely that the warming will continue, and very likely that it is predominantly due to human-driven emissions of greenhouse gases.

We don’t know accurately how much warming there will be, or exactly how it will be distributed.  That’s not an argument against acting. The short-term and medium-term harm of climate change increases faster than linearly with the temperature (4 degrees is much worse than 2 degrees, not twice as bad), which means the expected benefit of doing something to fix it is greater than if we had the same average prediction with zero uncertainty.

October 30, 2014

Cocoa puff

Both Stuff and the Herald have stories about the recent cocoa flavanols research (the Herald got theirs from the Independent).

Stuff’s story starts out

Remember to eat chocolate because it might just save your memory. This is the message of a new study, by Columbia University Medical Centre.

 

Sixteen paragraphs later, though, it turns out this isn’t the message

“The supplement used in this study was specially formulated from cocoa beans, so people shouldn’t take this as a sign to stock up on chocolate bars,” said Dr Simon Ridley, Head of Research at Alzheimer’s Research UK.

 

There’s a lot of variation in flavanol concentrations even in dark chocolate, but 900mg of flavanols would be somewhere between 150g and 1kg of dark chocolate per day.  Ordinary cocoa powder is also not going to provide 900mg at any reasonable consumption level.
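A rough check of those chocolate figures, assuming somewhere between about 0.9 and 6 mg of flavanols per gram of dark chocolate (illustrative values; flavanol content varies widely with variety and processing):

```python
dose_mg = 900                           # daily flavanol dose used in the study
low_mg_per_g, high_mg_per_g = 0.9, 6.0  # assumed flavanol content range

min_chocolate_g = dose_mg / high_mg_per_g  # best case: flavanol-rich chocolate
max_chocolate_g = dose_mg / low_mg_per_g   # worst case: flavanol-poor chocolate
print(f"daily chocolate needed: {min_chocolate_g:.0f} g to {max_chocolate_g:.0f} g")
```

Either end of that range is a lot of chocolate to eat every day, which is the point of the expert caution quoted above.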

The Herald story is much less over the top. They also quote in more detail the cautious expert comments and give less space to the positive ones. For example, that the study was very small and very short, and the improvement in memory was just in one measure of speed of very-short-term recall from a visual prompt, or that this measure was chosen because they expected it to be affected by cocoa rather than because of its relevance to everyday life. There was another memory test in the study, arguably a more relevant one, which was not expected to improve and didn’t.

Neither story mentions that the randomised trial also evaluated an exercise program that the researchers expected to be effective but wasn’t. Taking that into account, the statistical evidence for the effect of flavanols is not all that strong.

October 28, 2014

Absolute, relative, correlation, cause

The conclusions of a recent research paper

Delivery by [caesarean section] is associated with a modest increased odds of [autism], and possibly ADHD, when compared to vaginal delivery. Although the effect may be due to residual confounding, the current and accelerating rate of [caesarean section] implies that even a small increase in the odds of disorders, such as [autism] or ADHD, may have a large impact on the society as a whole. This warrants further investigation.

The Herald

Babies born through Caesarean section are more likely to develop autism, a new study says.

Academics warn the increasingly popular C-section deliveries heighten the risk of the disorder by 23 per cent.

There’s a fairly clear difference in language: the news story is fairly clearly implying that caesarean sections cause autism; the research paper is being scrupulously careful not to say that.

Using a relative risk is convenient in technical communication, but in non-technical communication makes the impact seem greater than it really is. The US Centers for Disease Control estimate a risk of 1 in 68 for autism spectrum disorder (there aren’t systematic NZ data).  If the correlation with C-section really is causal, we’re talking about roughly 14 kids with autism spectrum disorders per 1000 without a C-section and about 17 per 1000 with a C-section. The absolute risk increase, if it’s real, is about 3 cases per 1000 C-sections.

It’s also important to be clear that this correlation cannot explain much of the recent increases in autism. A relative risk of 1.23 means that if we went from no C-sections to 100% C-sections there would be a 23% increase in autism spectrum disorder. The observed increase is about five times that, and since  C-sections have only increased about 10 percentage points, not 100 percentage points, the observed increase in autism is about 50 times what this correlation could explain.
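The arithmetic behind those two paragraphs, treating the CDC’s 1-in-68 figure (roughly) as the baseline risk without a C-section, and the reported odds ratio as a risk ratio, which is reasonable for an outcome this rare:

```python
baseline = 1 / 68   # CDC estimate, about 14.7 per 1000
ratio = 1.23        # reported odds ratio, treated here as a risk ratio

risk_with_cs = baseline * ratio
extra_per_1000 = (risk_with_cs - baseline) * 1000
print(f"absolute increase: about {extra_per_1000:.1f} cases per 1000 C-sections")

# How much of the rise in autism diagnoses could this correlation explain?
cs_increase = 0.10                    # C-sections up about 10 percentage points
explained = (ratio - 1) * cs_increase
print(f"increase in prevalence explained: about {explained:.1%}")
# The observed increase is roughly 50 times this.
```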

There are (I’m told by people who know the issues) good reasons to think there are too many C-sections.  This probably won’t be one of the most important ones.

 

October 22, 2014

Screening the elderly

I’ve seen two proposals recently for population screening of older people. Probably neither is a good idea, but for different reasons.

We had a Stat of the Week nomination for a proposal to screen people over 65 for depression at ordinary GP visits, to prevent suicide. The proposal was based on the fact that 70% of the suicides were in people who had visited a GP within the past month.  If the average person over 65 visits a GP less than about 8.5 times a year, this means those visiting their GP are at higher risk.  However, the risk is still very small: 225 suicides over 5.5 years is 41/year, 70% of that is 29/year.

To identify those 29, it would be necessary to administer the screening questionnaire to a lot of people, at least hundreds of thousands. That in itself is costly; more importantly, since the questionnaire will not be perfectly accurate there will be tens of thousands of positive results. For example, a US randomised trial of depression screening in people over 60 recruited 600 participants from 9000 people screened. In the ‘usual care’ half of the trial there were 3 completed suicides over the next two years; in those receiving more intensive and focused help with depression there were 2. The trial suggests that screening and intensive intervention does help with symptoms of major depression (probably at substantial cost), but it’s not likely to be a feasible intervention to prevent suicide.
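The arithmetic in the two paragraphs above, spelled out:

```python
suicides = 225          # suicides in the over-65 group
years = 5.5
gp_visit_share = 0.70   # share who had seen a GP in the past month

per_year = suicides / years
preceded_by_visit = per_year * gp_visit_share
print(f"about {per_year:.0f} suicides/year, "
      f"{preceded_by_visit:.0f} preceded by a GP visit")

# Why 8.5 visits/year is the break-even rate: if visits are spread evenly,
# the chance of a visit in any given month is (visits per year) / 12,
# and 8.5 / 12 is about 0.71, close to the observed 70%.
implied_share = 8.5 / 12
print(f"visit-in-past-month probability at 8.5 visits/year: {implied_share:.2f}")
```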

 

The other proposal is from the UK, where GPs will be financially rewarded for dementia diagnoses. In contrast to depression, dementia is pretty much untreatable. There’s nothing that modifies the course of the disease, and even the symptomatic treatments are of very marginal benefit.

The rationale for the proposal is that early diagnosis gives patients and their families more time to think about options and strategies. That could be of some benefit, at least in the subset of people with dementia who are able and willing to talk about it, but similar advance planning could be done — and perhaps better — without waiting for a diagnosis.

Diagnosis isn’t like treatment. As British GP and blogger Martin Brunet points out

We are used to being paid for things of course, like asthma reviews and statin prescribing, and we are well aware of the problems this causes – but at least patients can opt out if they don’t like it.

They can refuse to attend a review, decline our offer of a statin or politely take the pill packet and store it unopened in the kitchen cupboard. They cannot opt out of a diagnosis.

 

September 26, 2014

Screening is harder than that

From the Herald

Calcium in the blood could provide an early warning of certain cancers, especially in men, research has shown.

Even slightly raised blood levels of calcium in men was associated with an increased risk of cancer diagnosis within one year.

The discovery, reported in the British Journal of Cancer, raises the prospect of a simple blood test to aid the early detection of cancer in high risk patients.

In fact, from the abstract of the research paper, 3% of people had high blood levels of calcium, and among those,  11.5% of the men developed cancer within a year. That’s really not strong enough prediction to be useful for early detection of cancer. For every thousand men tested you would find three cancer cases, and 27 false positives. What the research paper actually says under “Implications for clinical practice” is

“This study should help GPs investigate hypercalcaemia appropriately.”

That is, if a GP happens to measure blood calcium for some reason and notices that it’s abnormally high, cancer is one explanation worth checking out.
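The per-thousand figures above can be checked directly from the abstract’s numbers:

```python
tested = 1000
high_calcium_rate = 0.03   # 3% of men had raised blood calcium
cancer_given_high = 0.115  # 11.5% of those developed cancer within a year

flagged = tested * high_calcium_rate
true_positives = flagged * cancer_given_high
false_positives = flagged - true_positives
print(f"per 1000 men tested: {flagged:.0f} flagged, "
      f"{true_positives:.1f} cancers found, {false_positives:.1f} false positives")
```

Nine false positives for every cancer found is a poor trade for a general screening test, even before counting the cost of the follow-up investigations.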

The overstatement is from a Bristol University press release, with the lead

High levels of calcium in blood, a condition known as hypercalcaemia, can be used by GPs as an early indication of certain types of cancer, according to a study by researchers from the universities of Bristol and Exeter.

and later on an explanation of why they are pushing this angle

The research is part of the Discovery Programme which aims to transform the diagnosis of cancer and prevent hundreds of unnecessary deaths each year. In partnership with NHS trusts and six Universities, a group of the UK’s leading researchers into primary care cancer diagnostics are working together in a five year programme.

While the story isn’t the Herald’s fault, using a photo of a man drinking a glass of milk is. The story isn’t about dietary calcium being bad, it’s about changes in the internal regulation of calcium levels in the blood, a completely different issue. Milk has nothing to do with it.