February 27, 2015

Quake prediction: how good does it need to be?

From a detailed story in the ChCh Press (via Eric Crampton) about various earthquake-prediction approaches:

About 40 minutes before the quake began, the TEC in the ionosphere rose by about 8 per cent above expected levels. Somewhat perplexed, he looked back at the trend for other recent giant quakes, including the February 2010 magnitude 8.8 event in Chile and the December 2004 magnitude 9.1 quake in Sumatra. He found the same increase about the same time before the quakes occurred.

Heki says there has been considerable academic debate both supporting and opposing his research.

To have 40 minutes warning of a massive quake would be very useful indeed and could help save many lives. “So, why 40 minutes?” he says. “I just don’t know.”

He says if the link were to be proved more firmly in the future it could be a useful warning tool. However, there are drawbacks in that the correlation only appears to exist for the largest earthquakes, whereas big quakes of less than magnitude 8.0 are far more frequent and still cause death and devastation. Geomagnetic storms can also render the system impotent, with fluctuations in the total electron count masking any pre-quake signal.

Let’s suppose that with more research everything works out, and there is a rise in this TEC before all very large quakes. How much would this help in New Zealand? The obvious place is Wellington. A quake of over magnitude 8.0 struck the area in 1855, when it triggered a tsunami. A repeat would also shatter many of the city’s earthquake-prone buildings. A 40-minute warning could save many lives. It appears that TEC shouldn’t be that expensive to measure: it’s based on observing the time delays in GPS satellite transmissions as they pass through the ionosphere, so it mostly needs a very accurate clock (in fact, NASA publishes TEC maps every five minutes). It also looks as if it would be very hard to hack the ionosphere to force the alarm to go off. The real problem is accuracy.
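As a rough sketch of the measurement principle (my illustration, not Heki’s actual pipeline): the ionosphere delays the two GPS carrier frequencies by different amounts, and the delay difference is proportional to the electron content along the signal path. The constants below are standard GPS values; the 8 per cent threshold mirrors the rise reported before the Japanese quake.

```python
# Illustrative sketch (not Heki's pipeline): infer total electron content
# (TEC) from the differential ionospheric delay between the two GPS carrier
# frequencies, then flag readings above an expected baseline.

L1 = 1575.42e6  # GPS L1 carrier frequency, Hz
L2 = 1227.60e6  # GPS L2 carrier frequency, Hz
K = 40.3        # standard ionospheric refraction constant, m^3 s^-2

def tec_from_delay_difference(delta_range_m):
    """Slant TEC (electrons/m^2) from the L2-minus-L1 excess range in metres."""
    return delta_range_m / (K * (1.0 / L2**2 - 1.0 / L1**2))

def anomalous(tec_now, tec_expected, threshold=0.08):
    """Flag a TEC reading more than 8% above the expected level."""
    return tec_now > tec_expected * (1.0 + threshold)

# One metre of differential delay corresponds to roughly 9.5 TEC units
# (1 TECU = 1e16 electrons/m^2).
print(tec_from_delay_difference(1.0) / 1e16)  # ~9.5
```

The hard part, of course, is not the formula but deciding what the "expected level" is at any given moment, which is exactly where the false-positive problem below comes from.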

The system will have false positives and false negatives. False negatives (missing a quake) aren’t too bad, since that’s where you are without the system. False positives are more of a problem. They come in two forms: when the alarm goes off completely in the absence of a quake, and when there is a quake but no tsunami or catastrophic damage.

Complete false predictions would need to be very rare. If you tell everyone to run for the hills and it turns out to be sunspots or the wrong kind of snow, they will not be happy: the cost in lost work (and theft?) would be substantial, and there would probably be injuries.  Partial false predictions, where there was a large quake but it was too far away or in the wrong direction to cause a tsunami, would be just as expensive but probably wouldn’t cause as much ill-feeling or skepticism about future warnings.

Now for the disappointment. The story says “there has been considerable academic debate”. There has. For example, in a (paywalled) paper from 2013 looking at the Japanese quake that prompted Heki’s idea:

A detailed analysis of the ionospheric variability in the 3 days before the earthquake is then undertaken, where a simultaneous increase in foF2 and the Es layer peak plasma frequency, foEs, relative to the 30-day median was observed within 1 h before the earthquake. A statistical search for similar simultaneous foF2 and foEs increases in 6 years of data revealed that this feature has been observed on many other occasions without related seismic activity. Therefore, it is concluded that one cannot confidently use this type of ionospheric perturbation to predict an impending earthquake.

In translation: you need to look just right to see this anomaly, and there are often anomalies like this one without quakes. Over four years they saw 24 anomalies, only one shortly before a quake.  Six complete false positives per year is obviously too many.  Suppose future research could refine what the signal looks like and reduce the false positives by a factor of ten: that’s still evacuation alarms with no quake more than once every two years. I’m pretty sure that’s still too many.
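The arithmetic here is easy to make explicit (the counts are the ones summarised above; the tenfold improvement is hypothetical):

```python
# Back-of-envelope version of the false-alarm arithmetic:
# 24 anomalies over four years, only one shortly before a quake.

years = 4
anomalies = 24
true_hits = 1

false_alarms_per_year = (anomalies - true_hits) / years  # 5.75: "six per year"

# Hypothetical: future research cuts false positives tenfold.
improved_rate = false_alarms_per_year / 10               # 0.575 per year

# Mean time between needless evacuation alarms: still under two years.
years_between_false_alarms = 1 / improved_rate           # ~1.74 years
print(false_alarms_per_year, round(years_between_false_alarms, 2))
```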

 


Thomas Lumley (@tslumley) is Professor of Biostatistics at the University of Auckland. His research interests include semiparametric models, survey sampling, statistical computing, foundations of statistics, and whatever methodological problems his medical collaborators come up with. He also blogs at Biased and Inefficient.

Comments

  • Thomas Lumley

    In Twitter discussions an ambiguity became clear. I was guessing that the rate of false positives per unit time (i.e., 2/year) would be generalisable to Wellington, which gives a useless alarm.

    Logically, it could be that the positive predictive value (1 in 24) generalised to Wellington, which might be ok.

    Which one is more likely depends on whether the false positive signals are real signals of something that isn’t a quake but is sort of quaky, or whether they are noise or signals of some unrelated process. The paper I linked to seems to be working on the principle that this is just something the ionosphere does from time to time, my first possibility.
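    The distinction can be made concrete with a toy calculation (the 150-year Wellington recurrence interval is my illustrative assumption, suggested by the 1855 event; the other numbers are from the paper as summarised in the post):

    ```python
    # Two ways the Japanese numbers might generalise to Wellington.
    # Illustrative assumption: a magnitude-8+ Wellington quake about once
    # every 150 years (the 1855 event being the most recent).

    false_alarm_rate = 23 / 4   # alarms/year with no quake, if the *rate* travels
    ppv = 1 / 24                # P(real quake | alarm), if the *PPV* travels
    quakes_per_year = 1 / 150   # assumed Wellington rate of M8+ events

    # Rate generalises: ~5.75 useless alarms per year -- clearly unusable.
    alarms_if_rate_travels = false_alarm_rate

    # PPV generalises: alarms fire 24 times as often as real quakes,
    # i.e. roughly one alarm every six years -- possibly tolerable.
    alarms_if_ppv_travels = quakes_per_year / ppv  # 24/150 = 0.16 per year
    print(alarms_if_rate_travels, alarms_if_ppv_travels)
    ```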

    9 years ago

  • Joseph Delaney

    I wonder if the lower false positive rate (1 per 2 years) would be okay if the reaction was pretty minor. For example, Seattle is willing to close bridges for 1/2 on major arteries with little notice. Having that happen once every couple of years might be okay.

    Similarly, a fire station drill (essentially at a random time) every 2 years might actually be kind of useful.

    If you start evacuating buildings, then it seems way too often (but then I have the same opinion about fire drills). I guess it really depends on what intervention is being considered. Even just putting emergency services on high alert might have some value.

    Still, getting to E[# false drills]=0.5/year might be okay if measures were limited and it counted as the earthquake drill practice.

    9 years ago

    • Thomas Lumley

      The tsunami measures for a >8.0 quake would include getting everyone out of the central business/govt/shopping area of Wellington — comparable to evacuating the former ride-free area in Seattle. That’s what I think is out of the question at moderate false positive rates.

      It’s a good point that at 0.5/year you might be able to do other stuff: perhaps close the tunnels, hold or divert flights, and get ships somewhere safer (especially the ferries, since they’ll be needed afterwards).

      9 years ago