Biases in estimating b-values from small earthquake catalogues: how high are high b-values?



Bibliographic Details
Published in: Geophysical Journal International, 2022-02, Vol. 229 (3), pp. 1840–1855
Authors: Geffers, G-M; Main, I G; Naylor, M
Format: Article
Language: English
Online access: Full text
Description
Summary: The Gutenberg–Richter (GR) b-value describes the relative proportion of small to large earthquakes in a scale-free population and is a critical parameter for probabilistic estimation of seismic hazard. At low magnitudes, the scale-free behaviour breaks down below the magnitude of completeness mc due to censoring of the data, when the instrumentation used to construct the catalogue is incapable of completely recording all earthquakes in the study region above the background noise. At high magnitudes, it must also break down because natural tectonic and volcanic processes are incapable of an infinite release of energy. This breakdown at large magnitudes is commonly modelled as an exponential roll-off to either the incremental or cumulative GR distribution. This introduces an extra parameter and hence requires relatively more data to justify the additional model complexity. For tectonic seismicity, the estimated b-value is commonly close to unity. In contrast, studies of volcanic and induced seismicity often report significantly higher estimates of the b-value, albeit using relatively small data sets, both in sample size and in dynamic (magnitude) range above mc. Here, using synthetic data, we show that at low dynamic range it is statistically challenging to test whether a sample is representative of the scale-free GR behaviour or is controlled primarily by the finite-size roll-off. We then explore the potential biases that arise when the data quality does not allow this distinction to be made, and the implications for interpreting studies that report high estimated b-values. We find that b-values systematically higher than those used to generate the synthetic data are regularly obtained when the wrong model is assumed and when mc is set too high, resulting in catalogues that are too small.
This is important because it changes our understanding of the accuracy of elevated or variable b-values in catalogues of different dynamic ranges, and quantifies the likely bias in the inferred b-value compared to the underlying true distribution and its associated uncertainty. Finally, we recommend steps to minimize this bias.
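The bias described in the abstract can be sketched with a minimal synthetic experiment. The code below draws magnitudes from a GR (exponential) distribution and estimates b with Aki's (1965) maximum-likelihood formula, a standard estimator for continuous magnitudes, though not necessarily the exact procedure used in the paper. Truncating the synthetic catalogue at a finite maximum magnitude is used here as a crude stand-in for the finite-size roll-off; all function names and parameter values are illustrative, not taken from the article.

```python
import math
import random

random.seed(0)

def sample_gr(b, mc, n, m_max=None):
    """Draw n magnitudes >= mc from a GR law by inverse-transform sampling.

    If m_max is given, reject magnitudes above it -- a crude proxy for the
    finite-size roll-off discussed in the abstract (illustrative only).
    """
    beta = b * math.log(10.0)
    mags = []
    while len(mags) < n:
        m = mc - math.log(random.random()) / beta
        if m_max is None or m <= m_max:
            mags.append(m)
    return mags

def aki_b(mags, mc):
    """Aki (1965) maximum-likelihood b-value for continuous magnitudes."""
    return math.log10(math.e) / (sum(mags) / len(mags) - mc)

b_true, mc = 1.0, 1.0
# Open-ended catalogue: the estimator recovers b_true closely.
b_open = aki_b(sample_gr(b_true, mc, 50_000), mc)
# Truncated catalogue (dynamic range of only 1.5 magnitude units):
# the same estimator is biased high, as the abstract describes.
b_trunc = aki_b(sample_gr(b_true, mc, 50_000, m_max=2.5), mc)
print(f"open-ended: b = {b_open:.2f}, truncated: b = {b_trunc:.2f}")
```

With these settings the open-ended estimate lands near the true value of 1.0, while the low-dynamic-range catalogue yields a noticeably higher b, illustrating how limited magnitude range alone can inflate the estimate when the roll-off is ignored.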
ISSN: 0956-540X, 1365-246X
DOI: 10.1093/gji/ggac028