Posts filed under Risk (156)

February 27, 2015

Quake prediction: how good does it need to be?

From a detailed story in the ChCh Press (via Eric Crampton) about various earthquake-prediction approaches

About 40 minutes before the quake began, the TEC in the ionosphere rose by about 8 per cent above expected levels. Somewhat perplexed, he looked back at the trend for other recent giant quakes, including the February 2010 magnitude 8.8 event in Chile and the December 2004 magnitude 9.1 quake in Sumatra. He found the same increase about the same time before the quakes occurred.

Heki says there has been considerable academic debate both supporting and opposing his research.

To have 40 minutes warning of a massive quake would be very useful indeed and could help save many lives. “So, why 40 minutes?” he says. “I just don’t know.”

He says if the link were to be proved more firmly in the future it could be a useful warning tool. However, there are drawbacks in that the correlation only appears to exist for the largest earthquakes, whereas big quakes of less than magnitude 8.0 are far more frequent and still cause death and devastation. Geomagnetic storms can also render the system impotent, with fluctuations in the total electron count masking any pre-quake signal.

Let’s suppose that with more research everything works out, and there is a rise in this TEC before all very large quakes. How much would this help in New Zealand? The obvious place is Wellington. A quake over magnitude 8.0 was observed in the area in 1855, when it triggered a tsunami. A repeat would also shatter many of the earthquake-prone buildings. A 40-minute warning could save many lives. It appears that TEC shouldn’t be that expensive to measure: it’s based on observing the time delays in GPS satellite transmissions as they pass through the ionosphere, so it mostly needs a very accurate clock (in fact, NASA publishes TEC maps every five minutes). Also, it looks like it would be very hard to hack the ionosphere to force the alarm to go off. The real problem is accuracy.

The system will have false positives and false negatives. False negatives (missing a quake) aren’t too bad, since that’s where you are without the system. False positives are more of a problem. They come in two forms: when the alarm goes off completely in the absence of a quake, and when there is a quake but no tsunami or catastrophic damage.

Complete false predictions would need to be very rare. If you tell everyone to run for the hills and it turns out to be sunspots or the wrong kind of snow, they will not be happy: the cost in lost work (and theft?) would be substantial, and there would probably be injuries.  Partial false predictions, where there was a large quake but it was too far away or in the wrong direction to cause a tsunami, would be just as expensive but probably wouldn’t cause as much ill-feeling or skepticism about future warnings.

Now for the disappointment. The story says “there has been considerable academic debate”. There has. For example, in a (paywalled) paper from 2013 looking at the Japanese quake that prompted Heki’s idea

A detailed analysis of the ionospheric variability in the 3 days before the earthquake is then undertaken, where a simultaneous increase in foF2 and the Es layer peak plasma frequency, foEs, relative to the 30-day median was observed within 1 h before the earthquake. A statistical search for similar simultaneous foF2 and foEs increases in 6 years of data revealed that this feature has been observed on many other occasions without related seismic activity. Therefore, it is concluded that one cannot confidently use this type of ionospheric perturbation to predict an impending earthquake.

In translation: you need to look just right to see this anomaly, and there are often anomalies like this one without quakes. Over four years they saw 24 anomalies, only one shortly before a quake.  Six complete false positives per year is obviously too many.  Suppose future research could refine what the signal looks like and reduce the false positives by a factor of ten: that’s still evacuation alarms with no quake more than once every two years. I’m pretty sure that’s still too many.
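
Here’s that arithmetic as a rough Python sketch, if you want to play with the assumptions (the figures are the ones above; the ten-fold improvement is purely hypothetical):

```python
# Back-of-the-envelope false-alarm arithmetic for a TEC-based quake warning.
# Figures are the ones quoted above; the ten-fold improvement is hypothetical.
anomalies = 24      # anomalies seen in the search of past data
true_alarms = 1     # only one occurred shortly before a quake
years = 4

false_alarms_per_year = (anomalies - true_alarms) / years
print(f"false alarms per year now: about {false_alarms_per_year:.0f}")

improved = false_alarms_per_year / 10   # suppose research cuts them ten-fold
print(f"after a ten-fold reduction: {improved:.2f} per year, "
      f"or one every {1 / improved:.1f} years")
```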

 

Siberian hamsters or Asian gerbils

Every year or so there is a news story along the lines of “Everything you know about the Black Death is Wrong”. I’ve just been reading a couple of excellent posts by Alison Atkin on this year’s one.

The Herald’s version of the story (which they got from the Independent) is typical (but she has captured a large set of headlines)

The Black Death has always been bad publicity for rats, with the rodent widely blamed for killing millions of people across Europe by spreading the bubonic plague.

But it seems that the creature, in this case at least, has been unfairly maligned, as new research points the finger of blame at gerbils.

and

The scientists switched the blame from rat to gerbil after comparing tree-ring records from Europe with 7711 historical plague outbreaks.

That isn’t what the research paper (in PNAS) says. And it would be surprising if it did: could it really be true that Asian gerbils were spreading across Europe for centuries without anyone noticing?

The abstract of the paper says

The second plague pandemic in medieval Europe started with the Black Death epidemic of 1347–1353 and killed millions of people over a time span of four centuries. It is commonly thought that after its initial introduction from Asia, the disease persisted in Europe in rodent reservoirs until it eventually disappeared. Here, we show that climate-driven outbreaks of Yersinia pestis in Asian rodent plague reservoirs are significantly associated with new waves of plague arriving into Europe through its maritime trade network with Asia. This association strongly suggests that the bacterium was continuously reimported into Europe during the second plague pandemic, and offers an alternative explanation to putative European rodent reservoirs for how the disease could have persisted in Europe for so long.

If the researchers had found repeated, previously unsuspected invasions of Europe by hordes of gerbils, they would have said so in the abstract. They don’t. Not a gerbil to be seen.

The hypothesis is that plague was repeatedly re-imported from Asia (where it affects lots of species, including, yes, gerbils) to European rats, rather than persisting at low levels in European rats between the epidemics. Either way, once the epidemic got to Europe, it’s all about the rats [update: and other non-novel forms of transmission].

In this example, for a change, it doesn’t seem that the press release is responsible. Instead, it looks like progressive mutations in the story as it’s transmitted, with the great gerbil gradually going from an illustrative example of a plague host in Asia to the rodent version of Attila the Hun.

Two final remarks. First, the erroneous story is now in the Wikipedia entry for the great gerbil (with a citation to the PNAS paper, so it looks as if it’s real). Second, when the story is allegedly about the confusion between two species of rodent, it’s a pity the Herald stock photo isn’t the right species.

February 25, 2015

Measuring what you care about

If cannabis is safer than thought (as the Washington Post says), that might explain why the reporting is careful to stay away from thought.

[graph: thought]

 

The problem with this new research is that it’s looking at the acute toxicity of drugs — how the dose people usually take compares to the dose needed to kill you right away. It’s hard to overstate how unimportant this is in debates over regulation of alcohol, tobacco, and cannabis. There’s some concern about alcohol poisoning (in kids, mostly), but as far as I can remember I have literally never seen anti-tobacco campaigns mentioning acute nicotine poisoning as a risk, and even the looniest drug warriors don’t push fatal THC overdoses as the rationale for banning marijuana.

Alcohol is dangerous not primarily because of acute poisoning, but because of car crashes, violence, cancer, liver failure, and heart damage. Tobacco is dangerous not primarily because of acute poisoning, but because of lung cancer, COPD, heart disease, stroke, and other chronic diseases.

It’s hard to tell how dangerous marijuana is. It certainly causes dependence in some users, and there are reasons to think it might have psychological and neurological effects. If smoked, it probably damages the lungs. In all these cases, though, the data on frequency and severity of long-term effects are limited.  We really don’t know, and the researchers didn’t even try to estimate.

The conclusions of the researchers — that cannabis is over-regulated and over-panicked-about relative to other drugs — are reasonable, but the data provide very little support for them.  If the researchers had used the same methodology on caffeine, it would have looked much more dangerous than cannabis, and probably more dangerous than methamphetamine. That would have been a bit harder to sell, even with a pretty graph.

 

[story now in Herald, too]

February 19, 2015

London card clash sensitivity analysis

The data blog of the Daily Mirror reports a problem with ‘card clash’ on the London Underground.  You can now pay directly with a debit card instead of buying a ticket — so if you have both a transport card and a debit card in your wallet, you have the opportunity to enter with one and leave with the other and get overcharged. Alternatively, you can take the card out of your wallet and drop it.  Auckland Transport has a milder version of the same problem: no-touch credit cards can confuse the AT HOP reader and make it not recognise your card, but you won’t get overcharged unless you don’t notice the red light.

They looked at the numbers of cards handed in at lost-and-found across the London Underground over the past two years (based on an FOI request).

[chart: card-clash]

If we’re going to spend time on this, we might also consider what the right comparison is. The data include cards on their own and cards with other stuff, such as a wallet. We shouldn’t combine them: the ‘card clash’ hypothesis would suggest a bigger increase in cards on their own.

Here’s a comparison using all the data: the pale points are the observations, the heavy lines are means.

[chart: allcards]

Or, we might worry about trends over time and use just the most recent four months of comparison data:

[chart: recentcards]

Or, use the same four months of the previous year:

[chart: matchedcards]

 

In this case all the comparisons give basically the same conclusion: more cards are being handed in, but the increase is pretty similar for cards alone and for cards with other stuff, which weakens the support for the ‘card clash’ explanation.

Also, in the usual StatsChat spirit of considering absolute risks: there are 3.5 million trips per day and about 55 cards handed in per day, or one card for about 64,000 trips. With two trips per day, 320 days per year, that would average once per person per century.
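
For anyone who wants to redo the envelope calculation, here is a minimal sketch using the figures above (the two-trips-a-day, 320-days-a-year commuter is the same rough assumption as in the text):

```python
# Rough rate calculation from the figures quoted above.
trips_per_day = 3.5e6    # London Underground journeys per day
cards_per_day = 55       # cards handed in to lost property per day

trips_per_card = trips_per_day / cards_per_day
print(f"one card handed in per {trips_per_card:,.0f} trips")   # about 64,000

trips_per_person_per_year = 2 * 320   # two trips a day, 320 days a year
years_per_card = trips_per_card / trips_per_person_per_year
print(f"for a regular commuter: one lost card per {years_per_card:.0f} years")  # about 100
```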

West Island census under threat?

From the Sydney Morning Herald

Asked directly whether the 2016 census would go ahead as planned on August 9, a spokeswoman for the parliamentary secretary to the treasurer Kelly O’Dwyer read from a prepared statement.

It said: “The government and the Bureau of Statistics are consulting with a wide range of stakeholders about the best methods to deliver high quality, accurate and timely information on the social and economic condition of Australian households.”

Asked whether that was an answer to the question: “Will the census go ahead next year?” the spokeswoman replied that it was.

Unlike in Canada, it’s suggested the change would at least save money in the short term. It’s the longer-term consequences of reduced information quality that are a concern — not just directly for Census questions, but for all surveys that use Census data to compensate for sampling bias. How bad this would be depends on what is used to replace the Census: if it’s a reasonably large mandatory-response survey (as in the USA), it could work well. If it’s primarily administrative data, probably not so much.

In New Zealand, the current view is that we do still need a census.

Key findings are that existing administrative data sources cannot at present act as a replacement for the current census, but that early results have been sufficiently promising that it is worth continuing investigations.

 

February 16, 2015

Pot and psychosis

The Herald has a headline “Quarter of psychosis cases linked to ‘skunk’ cannabis”, saying

People who smoke super-strength cannabis are three times more likely to develop psychosis than people who have never tried the drug – and five times more likely if they smoke it every day.

The relative risks are surprisingly large, but could be true; the “quarter” attributable fraction needs to be qualified substantially. As the abstract of the research paper (PDF) says, in the convenient ‘Interpretation’ section

Interpretation: The ready availability of high potency cannabis in south London might have resulted in a greater proportion of first onset psychosis cases being attributed to cannabis use than in previous studies.

Let’s unpack that a little.  The basic theory is that some modern cannabis is very high in THC and low in cannabidiol, and that this is more dangerous than more traditional pot. That is, the ‘skunk’ cannabis has a less extreme version of the same problem as the synthetic imitations now banned in NZ. 

The study compared people admitted as inpatients in a particular area of London (analogous to our DHBs) to people recruited by internet and train advertisements, and leaflets (which, of course, didn’t mention that the study was about cannabis). The control people weren’t all that well matched to the psychosis cases, but it wasn’t too bad.  The psychosis cases were somewhat more likely to smoke cannabis, and much more likely to smoke the high-THC type. In fact, smoking of other cannabis wasn’t much different between cases and controls.

That’s where the relative risks of 3 and 5 come from. It’s still possible that these are due at least in part to some other factor; you can’t tell from just this sort of data. The attributable fraction (a quarter of cases) comes from combining the relative risk with the proportion of the population who are exposed.

Suppose ‘skunk-type’ cannabis triples your risk, and 20% of people in the population use it, as was seen for controls in the sample. General UK data (eg) suggest the rate in non-users might be 5 cases per 10,000 people per year. So, in 100,000 people, 80,000 would be non-users and you’d expect 40 cases per year. The other 20,000 would be users, and you’d expect a background rate of 10 cases plus 20 extra cases caused by the cannabis. So, in the 100,000 people, you’d get 70 cases per year, 50 of which would have happened anyway and 20 due to cannabis. That’s not exactly the calculation the researchers did — they used a trick where they don’t need the background rate as long as it’s low, and I rounded more — but it’s basically the same. I get 28%; they got 24%.

The figures illustrate two things. First, the absolute risk increase is roughly 20 cases per 20,000 users per year, or about 1 in 1,000 users per year. Second, the ‘quarter’ estimate is very sensitive to the proportion exposed. If 5% of people used ‘skunk-type’ cannabis, you can run the numbers again and you get 5 cases due to cannabis out of 55 in 100,000 people: only 9% of cases due to exposure.
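
Here’s the same back-of-the-envelope calculation as a short Python sketch, with the exposure proportion as a parameter. The 3× relative risk and the 5-per-10,000 background rate are the rough assumptions used above, not numbers taken from the paper:

```python
# Population attributable fraction by direct counting, using the rough
# assumptions in the text: relative risk 3 for 'skunk-type' users and a
# background rate of 5 cases per 10,000 people per year.

def attributable_fraction(p_exposed, relative_risk=3.0, background=5 / 10_000,
                          population=100_000):
    """Fraction of cases attributable to the exposure."""
    cases_unexposed = population * (1 - p_exposed) * background
    cases_exposed = population * p_exposed * background * relative_risk
    extra_cases = population * p_exposed * background * (relative_risk - 1)
    return extra_cases / (cases_unexposed + cases_exposed)

print(f"{attributable_fraction(0.20):.0%}")   # about 28-29%; the paper got 24%
print(f"{attributable_fraction(0.05):.0%}")   # about 9% if only 5% are exposed
```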

Now we’re at the ‘interpretation’ quote from the research paper. In this South London area, 20% of people have used mostly the high-potency cannabis, 44% mostly other types, and 37% are non-users. That’s a lot of pot. Even if the relative risks are correct, the population attributable proportion will be much lower for the UK as a whole (or for NZ as a whole).

Still, the research does tend to support the idea of regulated legalisation, the sort of thing that Mark Kleiman advocates, where limits on THC and/or higher taxes for higher concentrations can be used to push cannabis supply to lower-risk varieties.

 

February 12, 2015

Eat food

From the Herald, based on this paper

Dietary advice issued to tens of millions had warned that fat consumption should be strictly limited to cut the risk of heart disease and death.

But experts say the recommendations, which have been followed for the past 30 years, were not backed up by scientific evidence and should not have been issued.

First, the “not backed up by scientific evidence” actually means “not backed up by randomised trials”. When there’s a shortage of randomised trials on a topic it doesn’t mean there is no evidence. Randomised trials are ideal, but they are very hard to do usefully for effects of diet. The same issue of the scientific journal has a useful commentary piece talking about the evidence and policy questions.

Second, it’s true that there were real gaps in knowledge on the difference between types of fat back then. All fat isn’t the same, and neither is all saturated fat, or all polyunsaturated fat. Since I wasn’t in epidemiology back then, I don’t know how much this was a known unknown that should have led to more caution versus an unknown unknown.

Third, in the US at least, people didn’t really reduce their fat consumption as a result of the guidelines. For example, in a paper in the American Journal of Clinical Nutrition

In a comparison of NHANES 2005–2006 with NHANES I, men had a decreased absolute daily fat intake (by 20 ± 23 kcal, from 909 to 889 kcal), whereas women had an increased absolute daily fat intake (by 27 ± 14 kcal, from 577 to 605 kcal).

Fat intake as a proportion of calories decreased quite a lot, because calories went up, but absolute fat intake stayed fairly stable. Saying the recommendations ‘have been followed for the past 30 years’ is misleading.

Fourth, as this shows, we don’t know a lot about how to make recommendations that translate into the right sort of behaviour changes. This is another area where there’s a shortage of randomised trials. And of scientific evidence generally.

And finally, there was a good story by Martin Johnston in the Herald in December that gives more background on the issue. There’s genuine disagreement, but the establishment view isn’t what the caricatures suggest:

Professor Jackson reckons the Japanese and traditional Mediterranean diets offer insights. He says the balance of carbs and fats is probably unimportant as long as most fat is not saturated and most carb is the complex variety, not sugar and white flour-based refined carbs.

 

January 29, 2015

Absolute risk/benefit calculators

An interesting interactive calculator for heart disease/stroke risk, from the University of Nottingham. It lets you put in basic, unchangeable factors (age, race, sex), modifiable factors (smoking, diabetes, blood pressure, cholesterol), and then one of a set of interventions.

Here’s the risk for an imaginary unhealthy 50-year-old taking blood pressure medications:

[screenshot: bp]

The faces at the right indicate 10-year risk: without the unhealthy risk factors, if you had 100 people like this, one would have a heart attack, stroke, or heart disease death over ten years; with the risk factors and treatment, four would have an event (the pink and red faces). The treatment would prevent five events in 100 people, represented by the five green faces.
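
Putting the same icon counts into absolute-risk terms (a minimal sketch; the untreated-risk figure is inferred from the counts described above, as treated risk plus events prevented):

```python
# Ten-year absolute risks implied by the icon counts described above.
baseline_risk = 1 / 100       # same person without the unhealthy risk factors
treated_risk = 4 / 100        # with the risk factors, on blood pressure medication
prevented = 5 / 100           # the five green faces: events prevented by treatment

untreated_risk = treated_risk + prevented   # inferred: 9 in 100 over ten years
nnt = 1 / prevented                         # number needed to treat for ten years

print(f"risk with the risk factors, untreated: {untreated_risk:.0%}")
print(f"absolute risk reduction from treatment: {prevented:.0%}")
print(f"people treated for ten years to prevent one event: {nnt:.0f}")
```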

There’s a long list of possible treatments in the middle of the page, with the distinctive feature that, on the best evidence available, most of them don’t appear to reduce risk. For example, you might ask what this guy’s risk would be if he took vitamin and fish oil supplements. It would look like this:

[screenshot: vitamin]

 

The main limitation of the app is that it can’t handle more than one treatment at a time: you can’t look at blood pressure meds and vitamins, just at one or the other.

(via @vincristine)

January 27, 2015

Benadryl and Alzheimers

I expected the Herald story “Hay fever pills linked to Alzheimer’s risk – study” to be the usual thing, where a fishing expedition found a marginal correlation in low-quality data.  It isn’t.

The first thing I noticed when I found the original article is that I know several of the researchers. On the one hand that’s a potential for bias; on the other hand, I know they are both sensible and statistically knowledgeable. The study has good quality data: the participants are all in one of the Washington HMOs, and there is complete information on what gets prescribed for them and whether they fill the prescriptions.

One of the problems with drug:disease associations is confounding by indication. As Samuel Goldwyn observed, “Any man who goes to a psychiatrist needs to have his head examined”, and more generally the fact that medicine is given to sick people tends to make it look bad.  In this case, however, the common factor between the medications being studied is an undesirable side-effect for most of them, unrelated to the reason they are prescribed.  In addition to reducing depression or preventing allergic reactions, these drugs also block part of the effect of the neurotransmitter acetylcholine. The association remained just as strong when recent drug use was excluded, or when antidepressant drugs were excluded, so it probably isn’t that early symptoms of Alzheimer’s lead to treatment.

The association replicates results found previously, and is quite strong, about four times the standard error (“4σ”) or twice the ‘margin of error’. It’s not ridiculously large, but is enough to be potentially important: a relative rate of about 1.5.

It’s still entirely possible that the association is due to some other factor, but the possibility of a real effect isn’t completely negligible. Fortunately, many of the medications involved are largely obsolete: modern hayfever drugs (such as fexofenadine, ‘Telfast’) don’t have anticholinergic activities, and nor do the SSRI antidepressants. The exceptions are tricyclic antidepressants used for chronic pain (where it’s probably worth the risk) and the antihistamines used as non-prescription sleep aids.

January 9, 2015

The Internet of things and its discontents

The current Consumer Electronics Show is full of even more gadgets that talk to each other about you. This isn’t necessarily an unmixed blessing.

From the New Yorker

To find out, the scientists recruited more than five hundred British adults and asked them to imagine living in a house with three roommates. This hypothetical house came equipped with an energy monitor, and all four residents had agreed to pay equally for power. One half of the participants was told that energy use in the house had remained constant from one month to the next, and that each roommate had consumed about the same amount. The other half was told that the bill had spiked because of one free-riding, electricity-guzzling roommate.

From Buzzfeed

It’s not difficult to imagine a future in which similar data sets are wielded by employers, the government, or law enforcement. Instead of liberating the self through data, these devices could only further restrain and contain it. As Walter De Brouwer, co-founder of the health tracker Scanadu, explained to me, “The great thing about being made of data is that data can change.” But for whom — or what — are such changes valuable?

and the slightly chilling quote “it’s not surveillance, after all, if you’re volunteering for it”.

Both these links come from Alex Harrowell at the Yorkshire Ranter, whose comment on smart electricity meters is

The lesson here is both that insulation and keeping up to the planning code really will help your energy problem, rather than just provide a better class of blame, and rockwool doesn’t talk.