A transdimensional Bayesian approach handles GPS data limitations better than existing methods and may aid future seismic hazard assessments.
The b-value, which describes the relative proportion of small to large earthquakes in a catalog, is less sensitive to transient changes in the detection threshold and may improve the detection of precursory changes.
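For readers unfamiliar with the term: the b-value is the slope of the Gutenberg–Richter frequency–magnitude relation, log10 N(M) = a − bM, and is commonly estimated with Aki's (1965) maximum-likelihood formula. The sketch below is a minimal illustration on synthetic magnitudes, not the method of the study summarized above; the completeness magnitude mc and bin width dm are assumptions of the example.

```python
import numpy as np

def estimate_b_value(magnitudes, mc, dm=0.0):
    """Aki (1965) maximum-likelihood b-value estimate.

    magnitudes : array of event magnitudes
    mc         : assumed magnitude of completeness (detection threshold)
    dm         : magnitude bin width; use 0.1 for catalogs rounded to
                 0.1 units (Utsu's correction), 0 for continuous values
    """
    m = np.asarray(magnitudes)
    m = m[m >= mc]                              # keep the complete part only
    mean_excess = m.mean() - (mc - dm / 2.0)    # bin-corrected mean excess
    return np.log10(np.e) / mean_excess

# Synthetic catalog: above mc, Gutenberg-Richter magnitudes are exponential
# with scale log10(e)/b, so b = 1.0 here by construction.
rng = np.random.default_rng(0)
mags = 2.0 + rng.exponential(scale=np.log10(np.e), size=5000)
print(f"estimated b = {estimate_b_value(mags, mc=2.0):.2f}")  # ~1.00
```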
Spatial clustering of aftershocks explains why simple statistical models often outperform complex physics-based earthquake forecasting models, even when the physical mechanisms are modeled correctly.
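The study is summarized only briefly here, but "simple statistical models" in earthquake forecasting usually means clustering models such as ETAS, whose core ingredient is the Omori–Utsu aftershock decay law. A minimal sketch of that law, with purely illustrative parameter values:

```python
import numpy as np

def omori_rate(t, k=100.0, c=0.1, p=1.1):
    """Omori-Utsu aftershock decay: n(t) = k / (t + c)**p.

    t       : time since the mainshock, in days
    k, c, p : empirical constants; the values here are illustrative only
    """
    return k / (t + c) ** p

days = np.array([0.1, 1.0, 10.0, 100.0])
for t, r in zip(days, omori_rate(days)):
    print(f"t = {t:6.1f} d  ->  ~{r:7.1f} aftershocks/day")
```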
Earthquake hazard calculations for California's coast are refined using observations of precariously balanced rocks that would have toppled if the largest predicted shaking had occurred during the past 20,000 years.
Advanced computing technology can be used to forecast ground shaking from earthquakes and provide an early warning in real time.
A theoretical study explores why small earthquake sources can produce quasiperiodic sequences of nearly identical events, whereas earthquakes on large faults are intrinsically more variable.
A unique set of high-frequency groundwater-level monitoring records reveals a loss of approximately ten million cubic meters of groundwater after a major earthquake.
By reanalyzing seismic records, researchers identified vast numbers of tiny earthquakes in Southern California that trace previously unmapped fault structures and reveal how earthquakes are triggered.
A novel analysis of aftershock size distributions has important implications for more realistic assessment of the seismic hazard posed by earthquake sequences.
Mandated wastewater injection reductions in effect since 2016 are inadequate to prevent future large-magnitude earthquakes in the state, according to a new induced-seismicity model.