Improving the imperfect science of earthquake prediction

Earthquake prediction is a notoriously risky – yet simultaneously vital – pursuit. Models for estimating when, where and how hard a quake might hit rest on two theoretical pillars: the Omori and Gutenberg-Richter power laws.

The first has applications well beyond seismology – in predicting financial market movements, for instance – while the second is earthquake-specific.

Gutenberg-Richter holds that a main earthquake will be followed by a predictable number of aftershocks of predictable magnitudes. If the main quake measures four on the Richter scale, for instance, it will be followed by roughly 10 aftershocks of magnitude three and 100 of magnitude two.
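In rough terms the relation can be written as log10 N(M) = a − bM, where N(M) is the number of quakes of at least magnitude M and b is typically close to one. The sketch below is purely illustrative – the constant a and the assumption b = 1 are chosen to match the example above, not drawn from any real catalogue.

```python
# Illustrative sketch of the Gutenberg-Richter scaling, assuming b = 1.
# N(M) is the expected number of events of at least magnitude M; the
# constant 'a' is set so that a single magnitude-4 event is expected.

def expected_events(magnitude, a=4.0, b=1.0):
    """Return the expected number of events of at least `magnitude`."""
    return 10 ** (a - b * magnitude)

for m in (4, 3, 2):
    print(f"magnitude >= {m}: ~{expected_events(m):.0f} events")
# Prints roughly 1, 10 and 100 events for magnitudes 4, 3 and 2.
```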

Omori’s power law takes a different route: it describes how the rate of aftershocks decays with time after a major primary event.
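The modified Omori law is usually written as n(t) = K / (c + t)^p, where n(t) is the aftershock rate at time t after the main shock and K, c and p are constants fitted to each sequence (p is typically near one). A minimal sketch with invented parameter values, not figures fitted to any real sequence:

```python
# Minimal sketch of the modified Omori law for aftershock decay.
# The parameters K, c and p below are illustrative, not fitted values.

def omori_rate(t_days, K=200.0, c=0.05, p=1.1):
    """Expected aftershocks per day, t_days after the main shock."""
    return K / (c + t_days) ** p

for day in (1, 10, 30, 60):
    print(f"day {day:2d}: ~{omori_rate(day):.1f} aftershocks/day")
# The rate falls steeply: from ~190 per day on day 1 to ~2 per day by day 60.
```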

Both power laws are moderately successful in typical sequences, where a powerful first quake is followed, sometimes for months, by progressively weaker aftershocks. In recent years, however, there have been atypical earthquake clusters in which that pattern of ever-decreasing impacts hasn’t held. Instead, initial quakes have been followed by lesser aftershocks, but then by a second or even several further major quakes, sometimes more powerful than the first.

In these circumstances both power laws break down completely.

In April 2016, for instance, a big quake hit the Kumamoto region of Japan. The local meteorological agency issued aftershock warnings based on accepted prediction models, only to be surprised two days later when an even bigger quake struck.

The authorities had no choice but to suspend issuing advice, because the tools they were using evidently could not forecast in such circumstances.

Something similar, but even more complex, happened in Italy later in 2016, when a large quake hit the town of Amatrice, killing hundreds. In line with power law predictions, there were many smaller quakes over the next two months, but then another major quake struck, followed a couple of months later by four more.

All up, the Amatrice-Norcia seismic sequence, as it became known, involved over 50,000 quakes of varying intensity and laid bare the urgent need for a new forecasting model.


Italy’s Seismic Hazard Centre at the Istituto Nazionale di Geofisica e Vulcanologia (INGV) was already on the case, rolling out an Operational Earthquake Forecasting (OEF) system that combines three new predictive models.

Two of these are variations of an algorithm known as the Epidemic-Type Aftershock Sequence (ETAS), and the third is the Short-Term Earthquake Probability (STEP) model. The first two operate on the assumption that an initial quake can generate not only smaller secondary shocks but additional major ones. The third dispenses with the classification system altogether, treating every quake, big or small, on the same statistical footing.
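Very roughly, an ETAS-type model says that every recorded quake – not just the main shock – adds its own Omori-style contribution to the expected rate of future events, scaled by its magnitude, which is how a large aftershock can itself trigger further large shocks. The sketch below illustrates that idea with invented parameters; it makes no claim to reproduce the INGV implementations.

```python
import math

# Rough sketch of an ETAS-style (Epidemic-Type Aftershock Sequence) rate.
# Every past event adds an Omori-like term scaled by its magnitude.
# All parameters are invented for illustration, NOT the INGV values.

def etas_rate(t, past_events, mu=0.1, K=0.02, alpha=1.0,
              c=0.01, p=1.1, m_ref=3.0):
    """Expected events per day at time t (days), given past (time, magnitude) pairs."""
    rate = mu  # background seismicity
    for t_i, m_i in past_events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m_ref)) / (t - t_i + c) ** p
    return rate

# Example: a magnitude-6 main shock on day 0 and a magnitude-5.5 aftershock on day 2.
events = [(0.0, 6.0), (2.0, 5.5)]
for day in (1.0, 3.0, 10.0):
    print(f"day {day:4.1f}: ~{etas_rate(day, events):.2f} events/day "
          "above the reference magnitude")
```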

The forecasts are updated weekly, and after any new quake above a certain magnitude.

While the new OEF system certainly sounds like an improvement, it is still in its pilot stage, and its statistical depth and predictive skill had yet to be tested against a real seismic sequence.

To tackle that problem, a trio of seismologists led by Warner Marzocchi of INGV in Rome fed the data from the Amatrice-Norcia sequence into the system, updating its forecasts after each reported event.

The results, they write in the journal Science Advances, “show good agreement between spatial forecasts and locations of the target earthquakes”.

That’s promising, but the researchers are quick to point out that OEF is still in its “nascent stage”, and that its bedrock data is being slowly accumulated “brick by brick”.

Nevertheless, the results are encouraging. Given enough time, write Marzocchi and colleagues, the new model “may eventually pave the way to a ‘quiet revolution’ in earthquake forecasting.”
