Computer Science Artificial Intelligence

Seismology and Earthquake Studies

Description

This cluster of papers focuses on the application of machine learning and deep learning techniques to improve the accuracy and timeliness of earthquake early warning systems. It covers topics such as seismic signal classification, real-time seismology, convolutional neural networks for seismic phase picking, and the integration of citizen science in earthquake monitoring.

Keywords

Machine Learning; Seismic Signals; Earthquake Detection; Early Warning; Convolutional Neural Network; Seismic Event Classification; Real-Time Seismology; Deep Learning Models; Seismic Phase Picking; Citizen Science

Hypoinverse is a computer program that processes files of seismic station data for an earthquake (like P-wave arrival times and seismogram amplitudes and durations) into earthquake locations and magnitudes. It is one of a long line of similar USGS programs including HYPOLAYR (Eaton, 1969), HYPO71 (Lee and Lahr, 1972), and HYPOELLIPSE (Lahr, 1980). If you are new to Hypoinverse, you may want to start by glancing at the section “SOME SIMPLE COMMAND SEQUENCES” to get a feel for some simpler sessions. This document is essentially an advanced user’s guide, and reading it sequentially will probably plow the reader into more detail than he/she needs. Every user must have a crust model, station list and phase data input files, and glancing at these sections is a good place to begin. The program has many options because it has grown over the years to meet the needs of one of the largest seismic networks in the world, but small networks with just a few stations do use the program and can ignore most of the options and commands.

History and availability. Hypoinverse was originally written for the Eclipse minicomputer in 1978 (Klein, 1978). A revised version for VAX and Pro-350 computers (Klein, 1985) was later expanded to include multiple crustal models and other capabilities (Klein, 1989). This current report documents the expanded Y2000 version and supersedes the earlier documents. It serves as a detailed user's guide to the current version running on Unix and VAX-Alpha computers, and to the version supplied with the Earthworm earthquake digitizing system. Fortran-77 source code (Sun and VAX compatible) and copies of this documentation are available via anonymous ftp from computers in Menlo Park. At present, the computer is swave.wr.usgs.gov and the directory is /ftp/pub/outgoing/klein/hyp2000. If you are running Hypoinverse on one of the Menlo Park EHZ or NCSN unix computers, the executable currently is ~klein/hyp2000/hyp2000.

New features. The Y2000 version of Hypoinverse includes all of the previous capabilities, but adds Y2000 formats to those defined earlier. In most cases, the new formats add 2 digits to the year field to accommodate the century. Other fields are sometimes rearranged or expanded to accommodate a better field order. The Y2000 formats are invoked with the “200” command. When the Y2000 flag is turned on, all files are read and written in the new format and there is no mixing of format types in a single run. Some formats without a date field, like station files, have not changed. A separate program called 2000CONV has been written to convert old formats to new. Other new features, like expanded station names, calculating amplitude magnitudes from a variety of digital seismometers, station history files, interactive earthquake processing, and locations from CUSP (Caltech USGS Seismic Processing) binary files have been added.

General features. Hypoinverse will locate any number of events in an input file, which can be in one of several different formats. Any or all of printout, summary or archive output may be produced. Hypoinverse is driven by user commands. The various commands define input and output files, set adjustable parameters, and solve for locations of a file of earthquake data using the parameters and files currently set.
It is both interactive and "batch" in that commands may be executed either from the keyboard or from a file. You execute the commands in a file by typing @filename at the Hypoinverse prompt. Users may either supply parameters on the command line, or omit them and be prompted interactively. The current parameter values are displayed and may be taken as defaults by pressing just the RETURN key after the prompt. This makes the program very easy to use, provided you can remember the names of the commands. Combining commands with and without their required parameters into a command file permits a variety of customized procedures, such as automatic input of crustal model and station data, but prompting for a different phase file each time. All commands are 3 letters long and most require one or more parameters or file names. If they appear on a line with a command, character strings such as filenames must be enclosed in apostrophes (single quotes). Appendix 1 gives this and other free-format rules for supplying parameters, which are parsed by Fortran. When several parameters are required following a command, any of them may be omitted by replacing them with null fields (see Appendix 1). A null field leaves that parameter unchanged from its current or default value. When you start Hypoinverse, default values are in effect for all parameters except file names. Hypoinverse is a complicated program with many features and options. Many of these "advanced" or seldom-used features are documented here, but are more detailed than a typical user needs to read about when first starting with the program. I have put some of this material in smaller type so that a first-time user can concentrate on the more important information.
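To make the command-file idea concrete, here is a minimal sketch of a file one might run by typing @locate.hyp at the prompt. The "200" command and the @ convention come from this guide; the other three-letter command names (STA, CRH, PHS, SUM, PRT, LOC) and all file names are recalled from Hypoinverse documentation conventions and should be verified against the command reference before use, as should the comment convention (asterisk) used here.

* locate.hyp -- example Hypoinverse command file (names are illustrative)
200 T 1900 0          * turn on Y2000 formats
STA 'mynet.sta'       * station list (required input)
CRH 1 'mymodel.crh'   * crust model number 1 (required input)
SUM 'run1.sum'        * summary output file
PRT 'run1.prt'        * printout file
PHS                   * parameter omitted: the program prompts for the phase file
LOC                   * locate all events using the files and parameters now set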
First posted April 1, 2008; revised May 3, 2008. For additional information, contact: Natural Hazards, U.S. Geological Survey, 12201 Sunrise Valley Drive, Reston, VA 20192. The 2008 U.S. Geological Survey (USGS) National Seismic Hazard Maps display earthquake ground motions for various probability levels across the United States and are applied in seismic provisions of building codes, insurance rate structures, risk assessments, and other public policy. This update of the maps incorporates new findings on earthquake ground shaking, faults, seismicity, and geodesy. The resulting maps are derived from seismic hazard curves calculated on a grid of sites across the United States that describe the frequency of exceeding a set of ground motions. The USGS National Seismic Hazard Mapping Project developed these maps by incorporating information on potential earthquakes and associated ground shaking obtained from interaction in science and engineering workshops involving hundreds of participants, review by several science organizations and State surveys, and advice from two expert panels. The new probabilistic hazard maps represent an update of the 2002 seismic hazard maps developed by Frankel and others (2002), which used the methodology developed for the 1996 version of the maps (Frankel and others, 1996). Algermissen and Perkins (1976) published the first probabilistic seismic hazard map of the United States, which was updated in Algermissen and others (1990). The National Seismic Hazard Maps represent our assessment of the “best available science” in earthquake hazards estimation for the United States (maps of Alaska and Hawaii as well as further information on hazard across the United States are available on our Web site at http://earthquake.usgs.gov/research/hazmaps/).
This third edition provides a concise yet approachable introduction to seismic theory, designed as a first course for graduate students or advanced undergraduate students. It clearly explains the fundamental concepts, emphasizing intuitive understanding over lengthy derivations, and outlines the different types of seismic waves and how they can be used to resolve Earth structure and understand earthquakes. New material and updates have been added throughout, including ambient noise methods, shear-wave splitting, back-projection, migration and velocity analysis in reflection seismology, earthquake rupture directivity, and fault weakening mechanisms. A wealth of both reworked and new examples, review questions and computer-based exercises in MATLAB®/Python give students the opportunity to apply the techniques they have learned to compute results of interest and to illustrate Earth's seismic properties. More advanced sections, which are not needed to understand the other material, are flagged so that instructors or students pressed for time can skip them.
REAL LAT,LON,LAT2,LON2,LATEP,LONEP,MAG,LATR,LONR
COMMON /A3/ NRES(2,151),NXM(151),NFM(151),SW(2,151),SRSQ(2,151), SRWT(2,151),SXM(151),SXMSQ(151),SFM(151),SFMSQ(151),QNO(4)
COMMON /A5/ ZTR,XNEAR,XFAR,POS,IQ,KMS,KFM,IPUN,IMAG,IR,QSPA(9,40)
COMMON /A6/ NMAX,LMAX,NS,NL,MMAX,NR,FNO,Z,X(4,101),ZSQ,NRP,DF(101)
COMMON /A7/ KP,KZ,KOUT,WT(101),Y(4),SE(4),XMEAN(4),CP(180),SP(180)
COMMON /A8/ CAL(101),XMAG(101),FMAG(101),NM,AVXM,SDXM,NF,AVFM, SDFM,MAG,KDX(101),AMX(101),PRX(101),CALX(101),FMP(101)
COMMON /A12/ MSTA(101),PRMK(101),W(101),JMIN(101),P(101), KMK(101),WRK(101),TP(101),DT(101),COSL(101)
COMMON /A14/ MRK,MDOL,BLANK,MSTAR,DOT,STAR4,QUES,CRMK,MCENT,ISTAR
COMMON /A15/ M,L,J,ORG,JAV,PMIN,AZRES(101
Abstract A laboratory and a numerical model have been constructed to explore the role of friction along a fault as a factor in the earthquake mechanism. The laboratory model demonstrates that small shocks are necessary to the loading of potential energy into the focal structure; a large part, but not all, of the stored potential energy is later released in a major shock, at the end of a period of loading energy into the system. By the introduction of viscosity into the numerical model, aftershocks take place following a major shock. Both models have features which describe the statistics of shocks in the main sequence, the statistics of aftershocks and the energy-magnitude scale, among others.
Abstract The seismological observation system in China achieved rapid development during the last five years. The Data Management Center (DMC) of the China Earthquake Network Center (CENC) of the China Earthquake Administration (CEA) now receives and archives waveform data from more than 1000 permanent seismic stations around China in real time. For operational, backup, and data security considerations, the DMC at the Institute of Geophysics (IGP) of the CEA was established at the end of 2007. The IGPDMC is capable of receiving and processing continuous waveform data in real time from more than 1000 permanent seismic stations around China. Currently, the data processing and management mainly include data quality control, data format conversion, event data extraction at users' request, and data download service via the Internet. To date, the IGPDMC has supplied about 150 terabytes of waveform data to over 120 researchers at more than 30 academic institutions. More than 20 papers have been published in professional journals. After the great Wenchuan earthquake, the real-time waveform data from 56 portable stations deployed in the aftershock area were added to the IGPDMC. All these data make the IGPDMC a critical platform for supporting relevant seismological research. This paper gives a detailed description of the permanent seismic stations of the CEA's seismological observation system, the technical system construction of the IGPDMC, establishment of the near-real-time automatic event-extraction system for large earthquakes, as well as the prompt data support to Wenchuan earthquake-related research.
Summary The various mechanisms which could cause oblique slip faulting are briefly reviewed. It is thought that such faulting may frequently arise from the existence of preferred planes of fracture within the rocks. The dynamics of this mechanism is studied in some detail, and an expression is obtained for the first direction of slip within the plane under the influence of a general stress system of given orientation. It is found that the initial slip may occur in any possible direction within the plane, the direction depending on the relative values of the three principal pressures. The theory suggests that when a pre-existing fault is subjected to a reorientated stress system (tilted or rotated) the movement after fracture will usually be oblique. In conclusion, the general implications of the theory are discussed.
Preface. Acknowledgments. 1 Introduction. 2 Basic Seismological Theory. 3 Seismology and Earth Structure. 4 Earthquakes. 5 Seismology and Plate Tectonics. 6 Seismograms as Signals. 7 Inverse Problems. Appendix: Mathematical and Computational Background. References. Solutions to selected odd-numbered problems. Index.
S. Stein and M. Wysession's An Introduction to Seismology, Earthquakes, and Earth Structure is the textbook I've been waiting for. It combines the pedagogical strengths of Introduction to Seismology by P. Shearer (1999) and the breadth of coverage of Modern Global Seismology by T. Lay and T. Wallace (1995). The “price” of this combination is a rather lengthy text, but it is so well written that the length can be easily forgiven. At first glance, An Introduction to Seismology, Earthquakes, and Earth Structure appears to follow a very traditional path, beginning with a nice overview chapter on the relevance of seismology, followed by chapters on seismic waves that include stress and strain basics, Earth structure, earthquake sources, and seismology and plate tectonics. On closer inspection, though, the reader will find many chapter sections that are rather novel. Included among these are waves on a string, an excellent way to introduce students to the complexity of seismic waves; earthquake geodesy, which establishes the important connection between seismology and deformation studies; and plate kinematics, a clearly presented “short-course” on plate motion studies. These sections, and others like them, significantly enhance the text. The latter two in particular help give the book a broader perspective that is rare in standard seismology texts.
Earthquake mitigation efforts in the United States currently use long-term probabilistic hazard assessments and rapid post-earthquake notification to reduce the potential damage of earthquakes. Here we present the seismological design for and demonstrate the feasibility of a short-term hazard warning system. Using data from past earthquakes, we show that our Earthquake Alarm System (ElarmS) could, with current TriNet instrumentation, issue a warning a few to tens of seconds ahead of damaging ground motion. The system uses the frequency content of the P-wave arrival to determine earthquake magnitude, an approach that allows magnitude determination before any damaging ground motion occurs.
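The frequency-content approach can be illustrated with the widely published recursive predominant-period (tau_p) estimator. The sketch below is a generic version, not the ElarmS implementation; the smoothing constant and the regression step are illustrative placeholders.

import numpy as np

def predominant_period(x, dt, alpha=0.99):
    # Recursive predominant-period estimate tau_p over a P-wave window.
    # x: vertical-component waveform; dt: sample interval (s);
    # alpha: smoothing constant (placeholder value).
    dxdt = np.gradient(x, dt)
    X = D = 0.0
    tau = np.zeros(len(x))
    for i in range(len(x)):
        X = alpha * X + x[i] ** 2      # smoothed signal power
        D = alpha * D + dxdt[i] ** 2   # smoothed derivative power
        tau[i] = 2.0 * np.pi * np.sqrt(X / D) if D > 0 else 0.0
    return tau

# A magnitude would then come from a regression such as
# M ~ a * log10(max tau_p in the first few seconds) + b,
# with a and b calibrated on past earthquakes (hypothetical coefficients).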
A method was devised to extract useful information about the earthquake source from the coda of local small earthquakes. The method is based on the assumption that the power spectrum of coda waves of a local earthquake is only a function of time measured from the earthquake origin time and independent of distance and details of wave path to the station. Evidence supporting this assumption is presented, using the data on aftershocks of the Parkfield earthquakes of June 28, 1966. A simple statistical model of the wave medium that accounts for the observations on the coda is proposed. By applying the method to many Parkfield aftershocks, the relation between the seismic moment M0 and local magnitude ML is determined as log M0 (dyne cm) = 15.8 + 1.5ML. The size of a microearthquake with magnitude zero is estimated as 10×10 meters.
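Applying the stated relation is straightforward; for example, for an ML = 3.0 aftershock:

# log M0 (dyne cm) = 15.8 + 1.5 * ML, the relation determined above
ML = 3.0
log_M0 = 15.8 + 1.5 * ML   # = 20.3
M0 = 10.0 ** log_M0        # ~ 2.0e20 dyne cm
print(f"M0 = {M0:.2e} dyne cm")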
ObsPy: A Python Toolbox for Seismology. Moritz Beyreuther, Robert Barsch, Lion Krischer, Tobias Megies, Joachim Wassermann (Department of Earth and Environmental Sciences, Geophysical Observatory, Ludwig Maximilians Universität München, Munich, Germany) and Yannik Behr (School of Geography, Environment, and Earth Sciences, Victoria University of Wellington, New Zealand). Seismological Research Letters (2010) 81 (3): 530–533. https://doi.org/10.1785/gssrl.81.3.530. © 2010 by the Seismological Society of America.
The wide variety of computer platforms, file formats, and methods to access seismological data often requires considerable effort in preprocessing such data. Although preprocessing work-flows are mostly very similar, few software standards exist to accomplish this task. The objective of ObsPy is to provide a Python toolbox that simplifies the usage of Python programming for seismologists. It is conceptually similar to SEATREE (Milner and Thorsten 2009) or the exploration seismic software project MADAGASCAR (http://www.reproducibility.org). In ObsPy the following essential seismological processing routines are implemented and ready to use: reading and writing data in SEED/MiniSEED and Dataless...
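As an illustration of the kind of preprocessing the toolbox standardizes (a usage sketch, not code from the paper; the file name is hypothetical):

from obspy import read

st = read("example.mseed")    # read MiniSEED (format auto-detected) into a Stream
st.detrend("demean")          # remove the mean from each trace
st.filter("bandpass", freqmin=1.0, freqmax=10.0)   # band-pass filter in Hz
tr = st[0]                    # first Trace in the Stream
print(tr.stats.station, tr.stats.sampling_rate, tr.stats.starttime)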
Abstract Digital algorithms for robust detection of phase arrivals in the presence of stationary and nonstationary noise have a long history in seismology and have been exploited primarily to reduce the amount of data recorded by data logging systems to manageable levels. In the present era of inexpensive digital storage, however, such algorithms are increasingly being used to flag signal segments in continuously recorded digital data streams for subsequent processing by automatic and/or expert interpretation systems. In the course of our development of an automated, near-real-time, waveform correlation event-detection and location system (WCEDS), we have surveyed the abilities of such algorithms to enhance seismic phase arrivals in teleseismic data streams. Specifically, we have considered envelopes generated by energy transient (STA/LTA), Z-statistic, frequency transient, and polarization algorithms. The WCEDS system requires a set of input data streams that have a smooth, low-amplitude response to background noise and seismic coda and that contain peaks at times corresponding to phase arrivals. The algorithm used to generate these input streams from raw seismograms must perform well under a wide range of source, path, receiver, and noise scenarios. Present computational capabilities allow the application of considerably more robust algorithms than have been historically used in real time. However, highly complex calculations can still be computationally prohibitive for current workstations when the number of data streams become large. While no algorithm was clearly optimal under all source, receiver, path, and noise conditions tested, an STA/LTA algorithm incorporating adaptive window lengths controlled by nonstationary seismogram spectral characteristics was found to provide an output that best met the requirements of a global correlation-based event-detection and location system.
Twitter has received much attention recently. An important characteristic of Twitter is its real-time nature. We investigate the real-time interaction of events such as earthquakes in Twitter and propose an algorithm to monitor tweets and to detect a target event. To detect a target event, we devise a classifier of tweets based on features such as the keywords in a tweet, the number of words, and their context. Subsequently, we produce a probabilistic spatiotemporal model for the target event that can find the center of the event location. We regard each Twitter user as a sensor and apply particle filtering, which is widely used for location estimation. The particle filter works better than other comparable methods for estimating the locations of target events. As an application, we develop an earthquake reporting system for use in Japan. Because of the numerous earthquakes and the large number of Twitter users throughout the country, we can detect an earthquake with high probability (93 percent of earthquakes of Japan Meteorological Agency (JMA) seismic intensity scale 3 or more are detected) merely by monitoring tweets. Our system detects earthquakes promptly and notification is delivered much faster than JMA broadcast announcements.
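A minimal sketch of the location step as a bootstrap particle filter over candidate epicenters, with geotagged reports as noisy observations. The Gaussian noise model, parameter values, and uniform initialization are illustrative assumptions, not the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)

def particle_filter_epicenter(obs_locs, n=5000, obs_sigma=1.0, step_sigma=0.05):
    # Estimate an event epicenter from noisy "sensor" (tweet) locations.
    # obs_locs: array of shape (T, 2) of (lon, lat) observations over time.
    particles = rng.uniform([-5.0, -5.0], [5.0, 5.0], size=(n, 2)) + obs_locs[0]
    for z in obs_locs:
        particles += rng.normal(0.0, step_sigma, particles.shape)  # small diffusion
        d2 = np.sum((particles - z) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * obs_sigma ** 2))                   # Gaussian likelihood
        w /= w.sum()
        particles = particles[rng.choice(n, size=n, p=w)]          # resample
    return particles.mean(axis=0)

# Hypothetical usage: reports scattered around a true epicenter at (135.0, 35.0)
obs = rng.normal([135.0, 35.0], 0.8, size=(20, 2))
print(particle_filter_epicenter(obs))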
Abstract This supersedes Paper 1 (Gutenberg and Richter, 1942). Additional data are presented. Revisions involving intensity and acceleration are minor. The equation log a = I/3 − 1/2 is retained. The magnitude-energy relation is revised as follows: (20) log E = 9.4 + 2.14M − 0.054M². A numerical equivalent, for M from 1 to 8.6, is (21) log E = 9.1 + 1.75M + log(9 − M). Equation (20) is based on (7) log(A0/T0) = −0.76 + 0.91M − 0.027M², applying at an assumed point epicenter. Eq. (7) is derived empirically from readings of torsion seismometers and USCGS accelerographs. Amplitudes at the USCGS locations have been divided by an average factor of 2.5 to compensate for difference in ground; previously this correction was neglected, and log E was overestimated by 0.8. The terms in M² are due partly to the response of the torsion seismometers as affected by increase of ground period with M, partly to the use of surface waves to determine M. If MS results from surface waves, MB from body waves, approximately (27) MS − MB = 0.4(MS − 7). It appears that MB corresponds more closely to the magnitude scale determined for local earthquakes. A complete revision of the magnitude scale, with appropriate tables and charts, is in preparation. This will probably be based on A/T rather than amplitudes.
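A quick numerical check of how closely the "numerical equivalent" (21) tracks the revised relation (20) over the stated magnitude range:

import numpy as np

M = np.arange(1.0, 8.7, 0.5)
logE_20 = 9.4 + 2.14 * M - 0.054 * M**2        # eq. (20)
logE_21 = 9.1 + 1.75 * M + np.log10(9.0 - M)   # eq. (21)
for m, a, b in zip(M, logE_20, logE_21):
    print(f"M={m:.1f}  eq20={a:.2f}  eq21={b:.2f}  diff={a - b:+.2f}")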
We have developed a semi-automated method of determining accurate relative phase arrival times and uncertainty estimates for teleseisms recorded on regional networks. Our analysis begins by obtaining preliminary arrival times with a single-trace phase-picking algorithm. For each possible pair of traces we then perform a search for the maximum of their cross-correlation function in order to obtain relative delay times. Depending on event magnitude, the best results are obtained by using correlation windows containing 2 to 4 sec of the initial energy pulse of the phase. The cross-correlation derived delay times are then used to generate an overdetermined system of linear equations whose solution is an optimized set of relative arrival times. We solve for these times using least squares. Cycle skipping is eliminated through the automatic re-evaluation of cross-correlation functions which yield high equation residuals. Quantitative estimates of timing uncertainty are obtained from the variance of equation residuals associated with each trace. Using data from the Washington Regional Seismograph Network, we have found that for reasonably high-quality events, the rms uncertainty in arrival time estimates is on the order of the sample interval (0.01 sec). Reproducibility of delay anomalies is excellent for events from the same geographic locations despite significant differences in waveform character. In connection with a study of the deep structure of the Cascadia Subduction Zone, we include a preliminary examination of the variation of residual patterns with azimuth.
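The optimization step can be sketched as an overdetermined linear system t_j − t_i = d_ij solved by least squares, with one extra row pinning the mean arrival time (the data values and the mean-zero constraint below are illustrative):

import numpy as np

def optimize_arrival_times(pairs, delays, n_traces):
    # Least-squares relative arrival times from pairwise cross-correlation delays.
    # pairs: list of (i, j); delays: d_ij ~ t_j - t_i. Times are fixed to mean zero.
    G = np.zeros((len(pairs) + 1, n_traces))
    d = np.zeros(len(pairs) + 1)
    for k, ((i, j), dij) in enumerate(zip(pairs, delays)):
        G[k, i], G[k, j] = -1.0, 1.0
        d[k] = dij
    G[-1, :] = 1.0   # constraint: sum of times = 0 (removes the null space)
    t, *_ = np.linalg.lstsq(G, d, rcond=None)
    resid = G[:-1] @ t - d[:-1]   # per-equation residuals: a timing-uncertainty proxy
    return t, resid

# Hypothetical example with three traces and mutually consistent delays
t, r = optimize_arrival_times([(0, 1), (1, 2), (0, 2)], [0.10, 0.05, 0.15], 3)
print(t, r)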
Abstract Automatic phase-picking algorithms are designed to detect a seismic signal on a single trace and to time the arrival of the signal precisely. Because of the requirement for precise timing, a phase-picking algorithm is inherently less sensitive than one designed only to detect the presence of a signal, but still can approach the performance of a skilled analyst. A typical algorithm filters the input data and then generates a function characterizing the seismic time series. This function may be as simple as the absolute value of the series, or it may be quite complex. Event detection is accomplished by comparing the function or its short-term average (STA) with a threshold value (THR), which is commonly some multiple of a long-term average (LTA) of a characteristic function. If the STA exceeds THR, a trigger is declared. If the event passes simple criteria, it is reported. Sensitivity, expected timing error, false-trigger rate, and false-report rate are interrelated measures of performance controlled by choice of the characteristic function and several operating parameters. At present, computational power limits most systems to one-pass, time-domain algorithms. Rapidly advancing semi-conductor technology, however, will make possible much more powerful multi-pass approaches incorporating frequency-domain detection and pseudo-offline timing.
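A minimal recursive STA/LTA detector along these lines; the characteristic function, window lengths, and threshold are all illustrative choices:

import numpy as np

def sta_lta_triggers(x, dt, sta_win=0.5, lta_win=10.0, thr=4.0):
    # Declare triggers where STA/LTA of a characteristic function exceeds THR.
    # x: seismic trace; dt: sample interval (s); windows in seconds.
    cf = np.abs(x)                 # simple characteristic function
    a_s = dt / sta_win             # recursive smoothing constants
    a_l = dt / lta_win
    sta = lta = np.mean(cf[: int(lta_win / dt)])   # initialize from leading noise
    triggers = []
    for i, c in enumerate(cf):
        sta += a_s * (c - sta)     # short-term average
        lta += a_l * (c - lta)     # long-term average (some pickers freeze it during events)
        if lta > 0 and sta / lta > thr:
            triggers.append(i * dt)
    return triggers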
Abstract A computer program has been developed for the automatic detection and timing of earthquakes on a single seismic trace. The program operates on line and is sufficiently simple that it is expected to work in inexpensive low-power microprocessors in field applications. In tests with analog tapes of earthquakes, the program correctly identified and timed to within 0.05 sec about 70 per cent of the events which would normally be timed in operation of a network. The program evaluates the accuracy of its picks, and its estimates appear to be quite reliable. The algorithm is working at present in a 16-bit minicomputer and appears to be compatible with presently available microprocessors.
ConvNetQuake is the first neural network for detection and location of earthquakes from seismograms.
As the number of seismic sensors grows, it is becoming increasingly difficult for analysts to pick seismic phases manually and comprehensively, yet such efforts are fundamental to earthquake monitoring. Despite years of improvements in automatic phase picking, it is difficult to match the performance of experienced analysts. A more subtle issue is that different seismic analysts may pick phases differently, which can introduce bias into earthquake locations. We present a deep-neural-network-based arrival-time picking method called "PhaseNet" that picks the arrival times of both P and S waves. Deep neural networks have recently made rapid progress in feature learning, and with sufficient training, have achieved super-human performance in many applications. PhaseNet uses three-component seismic waveforms as input and generates probability distributions of P arrivals, S arrivals and noise as output. We engineer PhaseNet such that peaks in the probability distributions provide accurate arrival times for both P and S waves. PhaseNet is trained on the prodigious available data set provided by analyst-labelled P and S arrival times from the Northern California Earthquake Data Center. The data set we use contains more than 700 000 waveform samples extracted from over 30 yr of earthquake recordings. We demonstrate that PhaseNet achieves much higher picking accuracy and recall rate than existing methods when applied to the waveforms of known earthquakes, which has the potential to increase the number of S-wave observations dramatically over what is currently available. This will enable both improved locations and improved shear wave velocity models.
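The last step described, turning per-sample probability distributions into arrival-time picks at their peaks, can be sketched as simple peak extraction. The threshold and minimum separation below are placeholders, and this is not PhaseNet's released code.

import numpy as np
from scipy.signal import find_peaks

def picks_from_probabilities(prob_p, prob_s, dt, thr=0.5, min_sep=1.0):
    # Convert per-sample P/S probability traces into (time, confidence) picks.
    # thr and min_sep (s) are placeholder values.
    picks = {}
    for name, prob in (("P", prob_p), ("S", prob_s)):
        idx, props = find_peaks(prob, height=thr, distance=int(min_sep / dt))
        picks[name] = [(i * dt, float(h)) for i, h in zip(idx, props["peak_heights"])]
    return picks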
Preface 1. The scope of seismology 2. Elasticity theory 3. Vibrations and waves 4. Body elastic waves 5. Surface elastic waves and eigen-vibrations of a sphere 6. Reflection and refraction … Preface 1. The scope of seismology 2. Elasticity theory 3. Vibrations and waves 4. Body elastic waves 5. Surface elastic waves and eigen-vibrations of a sphere 6. Reflection and refraction of elastic waves 7. Seismic rays in a spherically stratified Earth model 8. Amplitudes of the surface motion due to seismic waves in a spherically stratified Earth model 9. Seismometry 10. Construction of travel-time tables 11. The seismological observatory 12. Seismic waves in anomalous structures 13. Seismic waves and planetary interiors 14. Long-period oscillations and the Earth's interior 15. Earthquake statistics and predictions 16. The earthquake source 17. Strong-motion seismology Appendix Selected bibliography References Unit conversion table Index.
Abstract Earthquake signal detection and seismic phase picking are challenging tasks in the processing of noisy data and the monitoring of microearthquakes. Here we present a global deep-learning model for simultaneous earthquake detection and phase picking. Performing these two related tasks in tandem improves model performance in each individual task by combining information in phases and in the full waveform of earthquake signals by using a hierarchical attention mechanism. We show that our model outperforms previous deep-learning and traditional phase-picking and detection algorithms. Applying our model to 5 weeks of continuous data recorded during the 2000 Tottori earthquakes in Japan, we were able to detect and locate two times more earthquakes using only a portion (less than 1/3) of seismic stations. Our model picks P and S phases with precision close to manual picks by human analysts; however, its high efficiency and higher sensitivity can result in detecting and characterizing more and smaller events.
Although deep learning has historical roots going back decades, neither the term "deep learning" nor the approach was popular just over five years ago, when the field was reignited by papers such as Krizhevsky, Sutskever and Hinton's now classic (2012) deep network model of ImageNet. What has the field discovered in the five subsequent years? Against a background of considerable progress in areas such as speech recognition, image recognition, and game playing, and considerable enthusiasm in the popular press, I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.
Abstract Earthquake Early Warning (EEW) systems can provide crucial seconds to tens of seconds of advance notice before the arrival of destructive seismic waves. Their effectiveness, however, depends on how quickly and accurately each system forecasts ground motion using real-time data. This study compares the performance of two distinct EEW algorithms (PLUM, a pure impact-based approach, and a P-wave Shaking Forecast Based EEWS (QuakeUp), which is a hybrid method) by simulating the 2016 Mjma 6.6 Central Tottori earthquake. PLUM [1] (modified by Kagawa [2]) predicts earthquake shaking based solely on observed seismic intensities. Once a station records intensity above a threshold, PLUM propagates that intensity to nearby locations, making it robust for complex ruptures but reliant on stations having already experienced strong motion. QuakeUp [3], by contrast, integrates P-wave amplitude measurements with rapid estimates of earthquake location and magnitude; this allows it to forecast shaking before stations register significant ground motion but can introduce uncertainties until the evolving source parameters stabilize. Our offline "playback" of the Central Tottori event demonstrates that PLUM offers consistently high alert accuracy (≥ 90%) once the threshold is exceeded but provides limited lead times for sites near the epicenter. In some cases, stations up to 30 km from the source received effective warnings too late for significant protective actions. Conversely, QuakeUp could issue alerts as early as 3 s after the origin time, yielding up to 12 s of lead time at 40 km. However, its accuracy briefly dropped to 64% before converging to 100% by around 12 s, reflecting the time required for magnitude and location estimates to mature. In addition, earthquake magnitude and location parameters are provided as byproduct information. Despite these differences, both algorithms delivered reliable alerts across much of Tottori Prefecture. The results highlight how algorithmic design and station coverage influence warning performance, with PLUM excelling under dense station networks and QuakeUp offering broader, earlier coverage where real-time source parameters can be accurately constrained.
Abstract Geodetic observations along convergent margins have achieved unprecedented resolution in detailing deformation associated with earthquake cycles. A comprehensive understanding of how to best interpret these data for forecasting remains crucial. Here, we combined analogue seismo‐tectonic models of a megathrust and Explainable Artificial Intelligence (XAI) to characterize the link between earthquakes and deformation. We utilized deformation features to train convolutional neural networks (CNN) that forecast the time left before a laboratory earthquake. We then used Integrated Gradients (IG), an XAI technique, to identify areas and features contributing to model forecasts. CNNs perform better compared to decision trees utilizing sparse point‐wise features, highlighting the importance of spatial patterns. IG reveals the significance of trench‐perpendicular deformation downdip of the rupturing asperity, trench‐parallel deformation inland, and local deformation curl, in forecasting rupture timing. These emphasize the need for dense networks to monitor deformation and suggest patterns that may signify rupture imminence in convergent margins.
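Integrated Gradients itself is compact to implement: attribute each input feature by the path integral of model gradients from a baseline to the input. The sketch below is a generic PyTorch version assuming a scalar time-to-failure output; it is not the authors' code, and the zero baseline and step count are illustrative.

import torch

def integrated_gradients(model, x, baseline=None, steps=50):
    # IG_i = (x_i - x'_i) * mean over alpha of dF/dx_i at x' + alpha * (x - x')
    if baseline is None:
        baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        xi = (baseline + alpha * (x - baseline)).requires_grad_(True)
        out = model(xi).sum()              # scalar forecast (e.g. time-to-failure)
        grad, = torch.autograd.grad(out, xi)
        total += grad
    return (x - baseline) * total / steps  # high |attribution| marks influential features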
Rapid and accurate detection of primary waves (P-waves) using high-rate Global Navigation Satellite System (GNSS) data is essential for earthquake monitoring and tsunami early warning systems, where traditional seismic methods … Rapid and accurate detection of primary waves (P-waves) using high-rate Global Navigation Satellite System (GNSS) data is essential for earthquake monitoring and tsunami early warning systems, where traditional seismic methods are less effective in noisy environments. We applied a wavelet-based method using a Mexican hat wavelet and dynamic threshold to thoroughly analyze the three-component displacement waveforms of the 2009 Padang, 2012 Simeulue, and 2018 Palu Indonesian earthquakes. Data from the Sumatran GPS Array and Indonesian Continuously Operating Reference Stations were analyzed to determine accurate displacements and P-waves. Validation with Indonesian geophysical agency seismic records indicated reliable detection of the horizontal component, with a time delay of less than 90 s, whereas the vertical component detection was inconsistent, owing to noise. Spectrogram analysis revealed P-wave energy in the pseudo-frequency range of 0.02–0.5 Hz and confirmed the method’s sensitivity to low-frequency signals. This approach illustrates the utility of GNSS data as a complement to seismic networks for the rapid characterization of earthquakes in complex tectonic regions. Improving the vertical component noise suppression might further help secure their utility in real-time early warning systems.
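A sketch of the core idea, a Mexican hat (Ricker) continuous wavelet transform with a data-adaptive threshold, implemented directly with NumPy. The scale set, kernel length, and MAD-based threshold factor are illustrative, not the paper's calibrated values.

import numpy as np

def ricker(points, a):
    # Mexican hat (Ricker) wavelet with width parameter a (unnormalized sketch).
    t = np.arange(points) - (points - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_detect(x, scales, k=4.0):
    # Flag samples whose wavelet response at any scale exceeds a dynamic
    # threshold (k times a running MAD); scales and k are illustrative.
    flags = np.zeros(len(x), dtype=bool)
    for a in scales:
        w = ricker(int(10 * a) | 1, a)                   # odd-length kernel
        c = np.convolve(x, w, mode="same")
        thr = k * np.median(np.abs(c - np.median(c)))    # MAD-based threshold
        flags |= np.abs(c) > thr
    return flags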
Abstract Seismic data quality assessment (QA) is the first and one of the most important steps before conducting any further data analysis. Traditional methods involve checking various metrics, such as … Abstract Seismic data quality assessment (QA) is the first and one of the most important steps before conducting any further data analysis. Traditional methods involve checking various metrics, such as spike detection and power spectral density, by setting strict thresholds or comparing data against synthetic benchmarks. However, these approaches often rely on pre-existing knowledge and assumptions about data anomalies, leading to potential misclassification of unusual cases. Here, we propose a deep autoencoder model, an unsupervised learning approach that evaluates data quality without making assumptions about normal and anomalous data, which can be used to identify deviations in recorded data that may indicate nascent instrument failure. We test the model with the U.S. International Monitoring System (IMS) seismic stations and demonstrate the capability of detecting anomalies on a monthly scale. This could prompt station operators to examine potential problems early, allowing sufficient time for instrument maintenance to prevent data outages. In addition, we use a new manually selected testing dataset to compare our model performance against two supervised machine learning (ML) approaches and a standard QA package, as baseline models. When applied to the dataset containing known data anomalies, performance of the supervised and unsupervised ML approaches is similar, with an accuracy of 88.1% for our model compared to ∼90% for the supervised ML approach and 78.2% for the standard QA package. Our model outperforms the baseline models when applied to new stations, where new types of data anomalies can be station-specific and not included in the training dataset. Finally, we show model transferability by training the model with data from the Global Seismograph Network only and applying it to the IMS network data. The results suggest that our model is generalizable and can be applied to new stations with good accuracy.
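A generic sketch of the approach: train an autoencoder on normal data only and flag records whose reconstruction error is unusually large. The dense architecture, layer sizes, and threshold rule below are illustrative assumptions; the paper's model is not reproduced here.

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    # Small dense autoencoder; layer sizes are illustrative.
    def __init__(self, n_in=128, n_latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 32), nn.ReLU(), nn.Linear(32, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_in))
    def forward(self, x):
        return self.dec(self.enc(x))

def fit_and_flag(train_x, test_x, epochs=200, lr=1e-3, k=3.0):
    # Train on "normal" data only; flag test vectors whose reconstruction error
    # exceeds mean + k * std of the training errors (k is illustrative).
    model = AutoEncoder(train_x.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(train_x), train_x)
        loss.backward()
        opt.step()
    with torch.no_grad():
        err_tr = ((model(train_x) - train_x) ** 2).mean(dim=1)
        err_te = ((model(test_x) - test_x) ** 2).mean(dim=1)
    return err_te > err_tr.mean() + k * err_tr.std()   # boolean anomaly flags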
This study evaluates the performance of Synthetic Template Matching for seismic event detection in the West Bohemia region (Czechia), comparing it with two established methods: the automated detector-locator PEPiN and an Artificial Neural Network. Synthetic templates are generated using a 1D velocity model and span a grid of five fundamental focal mechanisms (FMs), independent of any prior waveform or FM knowledge. The resulting catalog includes origin time, similarity, magnitude, location, number of detecting templates, and interpreted focal mechanism. In WEBNET data, Synthetic Template Matching with a cross-correlation threshold of 0.4 detected 264 events with a completeness magnitude of Mc = −0.1. All the detected seismicity is real and local, and the interpreted FMs (within the seismic network) predominantly align with strike-slip events. Although the method does not outperform PEPiN or the Artificial Neural Network in terms of Mc, it reliably estimates focal mechanisms and epicentral locations.
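Template matching reduces to sliding normalized cross-correlation of each template against the continuous data. The sketch below uses the 0.4 threshold quoted above, but the loop-based implementation and generic templates are illustrative (production codes typically use FFT-based correlation stacked over network channels).

import numpy as np

def normalized_xcorr(data, template):
    # Sliding normalized cross-correlation (Pearson r at each lag), values in [-1, 1].
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    out = np.empty(len(data) - n + 1)
    for i in range(len(out)):
        w = data[i : i + n]
        s = w.std()
        out[i] = 0.0 if s == 0 else np.sum(t * (w - w.mean())) / s
    return out

def detect(data, templates, thr=0.4):
    # Report detections where any template correlates above thr.
    hits = []
    for k, tpl in enumerate(templates):
        cc = normalized_xcorr(data, tpl)
        for i in np.flatnonzero(cc > thr):
            hits.append((i, k, cc[i]))   # (sample index, template id, similarity)
    return hits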
Presented on 29 May 2025: Session 26. Geological energy storage and carbon sequestration activities should consider the stability of surrounding faults and induced seismicity potential. In order to ensure the efficacy of the storage medium, it is crucial to possess a comprehensive understanding of the movement of pressure plumes within geological features by monitoring the potential impact on the deformation of geological layers as well as the ground surface. In this study, we propose a new tool (machine learning inversion solution, MLIS) capable of identifying opening (dilation) and shearing behaviour of faults and fractures pressurised by a fluid plume. While geo-storage of energy and CO2 is dominated mainly by dilational deformation, any fault slippage generates shear deformation. Combination of the two creates a mixed-mode deformation detectable via an array of tiltmeters, fibre-optic strain sensors, or Interferometric Synthetic Aperture Radar (InSAR). MLIS utilises surrogate models trained specifically for dilation and shear deformations, together with Bayesian inversion and differential evolution optimisation to identify the set of unknown parameters that gives the best fit to the observed data.
Abstract Forecasting groundwater level fluctuations induced by seismic activity presents a considerable challenge due to the inherent complexity and pronounced non-linearity of the underlying processes. The limited availability of predictive variables further complicates this task, with key factors such as seismic shaking intensity, geological characteristics of dams, and shear wave velocity serving as primary indicators. To address the scarcity of predictive features and the intricate non-linear dependencies between input variables and groundwater level responses, we introduce an innovative fusion of feature engineering and machine learning. Our methodology is applied to a comprehensive regional-scale, multi-site, multi-earthquake dataset from New Zealand aquifers. Utilizing a filter-based supervised feature selection technique, we extract novel feature sets with strong correlations to groundwater level dynamics. Subsequently, we develop a random forest classification model to predict earthquake-induced groundwater level changes. The proposed approach significantly enhances both predictive accuracy and interpretability compared to conventional probabilistic models, offering a robust framework for improved seismic hydrogeological forecasting.
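A minimal sketch of the pipeline described: filter-based supervised feature selection followed by a random forest classifier, using scikit-learn. The selector choice (ANOVA F-score), split, and hyperparameters are illustrative assumptions, not the paper's settings.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

def fit_gwl_classifier(X, y, k_features=10):
    # X: per-site, per-event feature matrix (e.g. shaking intensity, aquifer
    # descriptors, shear-wave velocity); y: groundwater-level response labels.
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
    sel = SelectKBest(f_classif, k=min(k_features, X.shape[1])).fit(Xtr, ytr)
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(sel.transform(Xtr), ytr)
    print(classification_report(yte, clf.predict(sel.transform(Xte))))
    return sel, clf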
SUMMARY Induced seismicity poses a significant challenge to the safe and sustainable development of Enhanced Geothermal Systems (EGS). This study explores the application of machine learning (ML) for forecasting cumulative seismic moment (CSM) of induced seismic events to evaluate reservoir stability in response to fluid injections. Using data from the Cooper Basin (Australia), the St1 Helsinki geothermal project (Finland), and a controlled laboratory injection experiment, we evaluate ML models that integrate catalogue and operational features with various frameworks. Results indicate that feature-rich models outperform simpler ones in complex seismic environments like the Cooper Basin and laboratory cases, where seismicity is promoted by earthquake interaction and fault reactivation. However, in scenarios like St1 Helsinki, with minimal event clustering, additional features offer limited predictive benefits. While ML models are promising, several challenges impede reliable forecasting, including data scarcity from operational wells, the extrapolation demands of cumulative output (i.e. CSM) and the difficulty of predicting abrupt CSM increases for large seismic events. Enhancing model robustness requires synthetic data augmentation and improved feature selection capable of capturing diverse reservoir dynamics. These advancements may enable more accurate near real-time forecasts of problematic induced seismic events, informing operational decisions to mitigate seismic risks while maximizing energy extraction, and hence offering a pathway for broader adoption of ML in renewable energy development and management.
Earthquake Early Warning (EEW) systems aim to alert users in advance of imminent shaking, enabling them to take action. In collaboration with local seismic agencies, the Swiss Seismological Service (SED) has developed national EEW systems across Central America. Public EEW alerts are now available; considering the frequent seismic activity and the vulnerability of the building stock, EEW has the potential to reduce casualties (i.e., fatalities and injuries). In this study, we build upon a probabilistic framework to quantify the potential benefits of EEW systems in reducing casualties. For each event generated in the stochastic catalog (100,000 event sets), we estimate the number of casualties in the absence of EEW. The framework evaluates the potential casualty reduction attributable to an operational EEW system, considering the expected warning times in each event at the target site, the subsequent actions taken upon receiving the alert, and system performance. For a return period of 475 years, the fatality reduction could reach ∼14% to 17%, corresponding to hundreds fewer fatalities in Costa Rica and Nicaragua, and thousands fewer fatalities in El Salvador and Guatemala. From this baseline scenario, we explore strategies to improve casualty reduction: (1) increase warning time by densifying the seismic network; and (2) compare the effectiveness of Drop, Cover, And Hold On (DCHO) versus evacuation as recommended protective actions. Our results suggest that evacuation is a suitable strategy for reducing fatalities in this region, given the prevalence of single-story structures. Given the available warning time, evacuation is advised for occupants on the first floor, and those on upper floors should adopt DCHO. Our findings indicate that the implementation of EEW leads to a ∼10% reduction in average annual fatalities. A cost–benefit analysis reveals that the economic benefits of public EEW systems significantly outweigh the associated costs, making EEW a cost-effective mitigation strategy.
ABSTRACT The goal of earthquake early warning (EEW) is to send alerts to the public before shaking arrives at their location to allow time to prepare and mitigate the chance … ABSTRACT The goal of earthquake early warning (EEW) is to send alerts to the public before shaking arrives at their location to allow time to prepare and mitigate the chance of negative outcomes. Nevada is the third most seismically active state in the United States with a large population living in high hazard areas. ShakeAlert, the EEW system active for the west coast of the United States, does not currently support alerts in Nevada, but a future expansion to the state could provide potentially lifesaving benefits to its residents. The first step toward including Nevada in ShakeAlert is analyzing performance metrics relevant to EEW based on the current network geometry and identifying potential improvements. Through systematic analyses of 34 earthquake scenarios, alongside station configuration, grid density testing, and network upgrade scores, we objectively quantify expected warning times and potential improvements while identifying the optimal locations to install new stations. We find that incorporating existing stations from Nevada could provide actionable warning times (at least five seconds and often greater) to Nevada residents for representative earthquake scenarios in the state while also improving warning times to California by about five seconds for events located near the state border. Installation of new stations to further densify the network improves potential warning times, with the recommended density of 20 km station spacing in western Nevada providing on average an additional five seconds of warning with a relatively modest number of new station installations.
On-fault geological observations from surface breaking earthquakes typically contain curved slickenlines, suggesting fault slip is curved. However, slickenlines commonly record only a fraction of coseismic slip, making it difficult to … On-fault geological observations from surface breaking earthquakes typically contain curved slickenlines, suggesting fault slip is curved. However, slickenlines commonly record only a fraction of coseismic slip, making it difficult to reconstruct the full slip trajectory. Near-fault seismic records, though capable of capturing ground motions associated with rupture, are limited in their ability to constrain on-fault slip direction as they record motion on only one side of the fault. Here, we overcome these challenges by directly observing fault slip using video footage of the 2025 Mw 7.7 Mandalay (Myanmar) strike-slip earthquake. We use pixel cross correlation to track features in successive frames of the video, revealing a pulse of fault slip with a magnitude of 2.5±0.5 m, duration of 1.3±0.2 s, and peak velocity of 3.2±1.0 m/s. The observed trajectory is notably curved, and includes a transient (0.3 m) dip slip component on a steeply dipping strike-slip fault. These observations are consistent with geological records of curved slickenlines supporting mechanical models that link rupture propagation direction to near-surface slip curvature. Our results provide the first direct visual evidence of curved coseismic fault slip, bridging a critical gap among seismological observations, geological data, and theoretical models.
This study aimed to propose a probability-guaranteed spectrum method to enhance the reliability of seismic building designs, thereby addressing the inadequacy of the current code-specified response spectrum based on mean fortification levels. The study systematically evaluated the fitting performance of dynamic coefficient spectra under normal, log-normal, and gamma distribution assumptions based on 288 ground motion records from type II sites. MATLAB (2010) parameter fitting and the Kolmogorov–Smirnov test were used, revealing that the gamma distribution optimally characterized spectral characteristics across all period ranges (p < 0.05). The study then established dynamic coefficient spectrum curves for a range of probability guarantee levels (50–80%), quantitatively revealing the insufficient probability assurance of code spectra in the long-period range. Furthermore, it proposed an evaluation framework for load safety levels of spectral values over the design service period, demonstrating that increasing probability guarantee levels significantly improved safety margins over a 50-year reference period. This method provides probabilistic foundations for the differentiated seismic design of important structures and offers valuable insights for revising current code provisions based on mean spectra.
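The distribution-fitting step maps directly onto a few lines of SciPy. The sketch below fits the three candidate distributions to a synthetic stand-in for the 288 spectral ordinates at one period, ranks them with the Kolmogorov–Smirnov test, and reads a probability-guaranteed ordinate off the gamma fit; the data and the chosen 80% level are illustrative, not the study's records.

```python
# Hedged sketch of the fitting-and-testing step on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
beta = rng.gamma(shape=4.0, scale=0.5, size=288)   # stand-in for 288 records

for name, dist in [("normal", stats.norm),
                   ("log-normal", stats.lognorm),
                   ("gamma", stats.gamma)]:
    params = dist.fit(beta)                        # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(beta, dist.name, args=params)
    print(f"{name:11s} KS={ks_stat:.3f} p={p_value:.3f}")

# A fitted distribution also yields spectral ordinates at any probability
# guarantee level, e.g. the 80% level of the gamma fit:
shape, loc, scale = stats.gamma.fit(beta)
print("80% guarantee ordinate:",
      stats.gamma.ppf(0.80, shape, loc=loc, scale=scale))
```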
This paper explores the application of convolutional neural networks (CNNs) in predictive modelling for seismic events and flood risks, with a particular focus on forecasting extreme quantile events that exceed historical data limits. Traditional risk assessment methods often struggle to estimate such extremes, highlighting the need for more advanced predictive models capable of handling rare but high-impact events. This research enhances CNN architecture to improve accuracy in high quantile predictions by integrating multi-source spatiotemporal data, addressing a critical research gap. The methodology involves incorporating diverse datasets, including geospatial, meteorological, and historical seismic or flood records, into CNN models to augment predictive capabilities. These models undergo systematic validation using historical events and real-world data to assess their reliability, robustness, and practical relevance. Furthermore, the study evaluates the potential of these advanced prediction models to inform disaster risk management and mitigation strategies. By leveraging deep learning techniques and optimizing CNN structures, this research aims to refine forecasting precision, supporting proactive disaster preparedness. The anticipated outcome is an improved predictive framework that enhances early warning systems, facilitates informed decision-making, and strengthens emergency response mechanisms. Ultimately, this study contributes to the broader goal of increasing resilience against natural disasters by equipping policymakers, emergency responders, and urban planners with more accurate and timely risk assessments.
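The ingredient that lets a network target a high quantile rather than the conditional mean is the pinball (quantile) loss, whose minimizer is the q-th conditional quantile. The PyTorch sketch below attaches it to a deliberately tiny CNN; the model, data shapes, and hyperparameters are assumptions for illustration, not the architecture developed in this paper.

```python
# Sketch of high-quantile training with the pinball (quantile) loss.
import torch
import torch.nn as nn

def pinball_loss(pred, target, q=0.99):
    """Asymmetric loss whose minimizer is the q-th conditional quantile."""
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1) * err))

# Toy CNN over single-channel spatiotemporal grids (e.g. 32x32 rasters)
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(16, 1, 32, 32)        # synthetic predictor fields
y = torch.randn(16, 1).abs() * 10     # synthetic event intensities

for step in range(5):                 # a few illustrative steps
    opt.zero_grad()
    loss = pinball_loss(model(x), y, q=0.99)
    loss.backward()
    opt.step()
    print(f"step {step}: pinball loss {loss.item():.3f}")
```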
In early 2025, the Santorini–Amorgos area (Aegean Volcanic Arc, Greece) experienced a seismic swarm, with dozens of M ≥ 4.0 earthquakes and a maximum magnitude of M = 5.2. Beyond its seismological interest, the sequence was notable for triggering rare increased preparedness actions by Greek Civil Protection operational structures in anticipation of an imminent destructive earthquake. These actions included (i) risk communication, (ii) the reinforcement of operational structures with additional personnel and equipment on the affected islands, (iii) updates to local emergency plans, (iv) the dissemination of self-protection guidance, (v) the activation of emergency alert systems, and (vi) volunteer mobilization, including first aid and mental health first aid courses. Although these actions were in line with contingency plans, public participation was limited; volunteers helped bridge this gap, focusing on vulnerable groups. The implemented actions in Greece are also compared with increased preparedness during the 2024–2025 seismic swarms in Ethiopia, as well as preparedness before the highly anticipated major earthquake in Istanbul (Turkey). In Greece and Turkey, legal and technical frameworks enabled swift institutional responses. In contrast, Ethiopia highlighted the risks of limited preparedness and the need to embed disaster risk reduction in national development strategies. All cases affirm that preparedness, through infrastructure, planning, communication, and community engagement, is vital to reducing earthquake impacts.
Disaster warning and emergency response are an emerging field of HPC/cloud computing now known as urgent computing, meaning that the computation must complete within short time scales. In this setting, timely warning of the population matters more than accuracy: as long as the predictions are sufficiently accurate, a coarser level of precision is acceptable. Urgent computing also involves processing large amounts of data into well-targeted, high-resolution, and highly reliable information for emergency management stakeholders. At an early stage, it is also important to take into account the uncertainties surrounding the disaster parameters; to do so, the simulation process must cover a range of possible values for each of the main input parameters, as sketched below. Following this principle, we present an impact assessment workflow designed to provide building-specific damage assessments for earthquakes and possible subsequent tsunamis. We target short run times to guarantee that results arrive in time for emergency response decisions and that a proper impact assessment is provided, allowing for meaningful planning of rescue actions. We present the design principles, from a time-aware model of computation to the specification of the workflow and the individual programs that compose it. A working prototype was developed as part of the LEXIS project. This paper also includes preliminary experiments toward robust urgent computing, utilizing distributed, heterogeneous HPC and cloud infrastructures.
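The uncertainty-handling principle amounts to fanning out one simulation per sample of the uncertain inputs. A minimal sketch, with invented parameter names and ranges (the LEXIS workflow itself is not reproduced here):

```python
# Hedged sketch: build an ensemble of scenario runs by gridding plausible
# ranges of the main source parameters. Names and ranges are illustrative.
import itertools
import numpy as np

magnitude = np.linspace(7.6, 8.2, 4)          # uncertain magnitude
depth_km = np.linspace(10.0, 30.0, 3)         # uncertain hypocentral depth
epicenter_shift_km = np.linspace(-15.0, 15.0, 3)  # uncertain location

ensemble = list(itertools.product(magnitude, depth_km, epicenter_shift_km))
print(f"{len(ensemble)} scenario runs to dispatch")

def dispatch(params):
    # placeholder for submitting one run to an HPC/cloud back end
    mag, depth, shift = params
    return {"mag": mag, "depth": depth, "shift": shift, "status": "queued"}

jobs = [dispatch(p) for p in ensemble]   # in practice: asynchronous submission
```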
Volcano seismicity is often detected and classified based on its spectral properties. However, the wide variety of volcano seismic signals and increasing amounts of data make accurate, consistent, and efficient detection and classification challenging. Machine learning (ML) has proven very effective at detecting and classifying tectonic seismicity, particularly using Convolutional Neural Networks (CNNs) and leveraging labeled datasets from regional seismic networks. Progress has been made applying ML to volcano seismicity, but efforts have typically been focused on a single volcano and are often hampered by the limited availability of training data. We build on the method of Tan et al. [2024] (doi:10.1029/2024JB029194) to generalize a spectrogram-based CNN termed the VOlcano Infrasound and Seismic Spectrogram Neural Network (VOISS-Net) to detect and classify volcano seismicity at any volcano. We use a diverse training dataset of over 270,000 spectrograms from multiple volcanoes: Pavlof, Semisopochnoi, Tanaga, Takawangha, and Redoubt (Alaska, USA); Mt. Etna (Italy); and Kīlauea (Hawai`i, USA). These volcanoes present a wide range of volcano seismic signals, source-receiver distances, and eruption styles. Our generalized VOISS-Net model achieves an accuracy of 87% on the test set. We apply this model to continuous data from several volcanoes and eruptions included within and outside our training set, and find that multiple types of tremor, explosions, earthquakes, long-period events, and noise are successfully detected and classified. The model occasionally confuses transient signals such as earthquakes and explosions and misclassifies seismicity not included in the training dataset (e.g., teleseismic earthquakes). We envision the generalized VOISS-Net model being applicable in both research and operational volcano monitoring settings.
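The spectrogram-then-CNN pattern underlying such classifiers can be sketched in a few lines. The toy network, class list, and windowing parameters below are assumptions for illustration and do not reproduce the published VOISS-Net architecture; SciPy and PyTorch are assumed available.

```python
# Illustrative sketch of spectrogram-based seismic classification.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

fs = 50.0                                  # sampling rate, Hz
trace = np.random.randn(int(240 * fs))     # 4-minute synthetic seismic trace

# Time-frequency image: log-power spectrogram of the window
f, t, Sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=128)
img = np.log10(Sxx + 1e-12).astype(np.float32)

classes = ["noise", "earthquake", "explosion", "tremor", "long-period"]
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(classes)),
)

x = torch.from_numpy(img)[None, None]      # shape (batch, channel, freq, time)
probs = torch.softmax(cnn(x), dim=1)       # untrained here; training needs labels
print(classes[int(probs.argmax())], probs.detach().numpy().round(3))
```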
Geodesy and its high precision are important instruments for the study of active tectonics and the presentation of the movement of solid parts of the earth. Deformations caused by earthquakes represent essential information for defining seismogenic zones. Precise measurements must be made on the wall of the fault itself, or on the system of connected active faults, to measure the rate of deformation of the earth's crust between, during, and after earthquakes. In Bosnia and Herzegovina, the spatial density of GNSS stations used in modern geodynamic studies is low. The permanent GNSS station "SRJV" in Sarajevo is the only permanent GNSS station in the region. It is part of the EUREF Permanent GNSS Network and, in that segment, has up-to-date available time series of GNSS coordinates.
The paper is dedicated to the modeling of tectonic movements based on GNSS coordinate time series, which were analyzed using the Kalman filter. The research area includes the territory of Japan, which is one of the most seismically active regions on Earth. The devastating Tohoku earthquake of 2011 was the result of subduction between the Pacific and North American plates. Different offsets were observed by analyzing the time series of GNSS coordinates. The intensity of the offset caused by the Tohoku earthquake is related to the distance of the observed station from the epicentre. The horizontal and vertical movements of Honshu Island are not homogeneous, which results from the fact that the GNSS stations are located on different tectonic plates.
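The kind of analysis described can be sketched with a constant-velocity Kalman filter over a daily coordinate series, where a coseismic offset shows up as an unusually large innovation. The noise levels, offset size, and synthetic series below are assumptions, not values from the paper.

```python
# Minimal sketch: constant-velocity Kalman filter on a synthetic GNSS series
# with a coseismic jump; large innovations flag possible offsets.
import numpy as np

dt = 1.0                                  # daily solutions
F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [position (mm), velocity]
H = np.array([[1.0, 0.0]])
Q = np.diag([0.01, 1e-4])                 # assumed process noise
R = np.array([[9.0]])                     # observation noise (3 mm sigma)

rng = np.random.default_rng(7)
n = 500
truth = 0.05 * np.arange(n)               # 0.05 mm/day secular motion
truth[300:] += 40.0                       # coseismic offset at epoch 300
z = truth + rng.normal(0, 3.0, n)

x = np.zeros(2); P = np.eye(2) * 100.0
for k in range(n):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    innov = z[k] - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ innov).ravel()
    P = (np.eye(2) - K @ H) @ P
    if abs(innov[0]) > 4 * np.sqrt(S[0, 0]):
        print(f"epoch {k}: innovation {innov[0]:.1f} mm, possible offset")
```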
David H. Smith | INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
Abstract - Background removal (portrait matting) is an important pre-processing step for photography, augmented reality, and videography. Traditional chroma-key tools require manual trimaps or controlled lighting, limiting flexibility. We propose an automated deep learning solution that uses the MODNet portrait matting model in a Streamlit-based web application. Users upload a subject image (e.g., a person on a green or arbitrary background) and a new background image; the system runs MODNet to compute an alpha matte, applies green-spill correction, and overlays the foreground onto the new background. The UI includes an interactive canvas for adjusting the subject's position and a download button for the final composite. We compare MODNet's performance to other segmentation baselines (DeepLabv3, U²-Net as used in Rembg) in terms of mask quality. Evaluation metrics such as Intersection-over-Union (IoU) and pixel accuracy are used to quantify performance. Results indicate that MODNet produces superior edge delineation and photo-realistic composites compared to standard segmentation approaches, providing an effective and user-friendly background removal tool. Key Words: Background removal, portrait matting, image segmentation, MODNet, DeepLabv3, U²-Net, Streamlit, interactive interface
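Two small operations underpin such a pipeline regardless of which matting model produces the alpha matte: alpha compositing of the foreground over the new background, and IoU for scoring masks against a reference. A minimal NumPy sketch with synthetic placeholders for MODNet's output (image I/O and the Streamlit UI are omitted):

```python
# Sketch of alpha compositing and IoU scoring with synthetic arrays.
import numpy as np

def composite(fg, bg, alpha):
    """Blend foreground over background: out = alpha*fg + (1-alpha)*bg."""
    a = alpha[..., None]                    # broadcast over RGB channels
    return (a * fg + (1.0 - a) * bg).astype(fg.dtype)

def iou(mask_a, mask_b):
    """Intersection-over-Union of two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

h, w = 64, 64
fg = np.full((h, w, 3), 200, dtype=np.float32)   # stand-in subject image
bg = np.zeros((h, w, 3), dtype=np.float32)       # stand-in new background
alpha = np.zeros((h, w), dtype=np.float32)
alpha[16:48, 16:48] = 1.0                        # stand-in predicted matte

out = composite(fg, bg, alpha)
print(out.shape, iou(alpha > 0.5, alpha > 0.5))  # -> (64, 64, 3) 1.0
```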
Seismic facies analysis, essential for subsurface geological exploration, has traditionally struggled to capture subtle variations in complex stratigraphic environments. This study uses spectral decomposition and unsupervised machine learning, specifically the Kohonen Self-Organizing Map, to improve the identification of detailed seismic facies. Spectral decomposition enables frequency-based seismic data analysis, capturing intricate geological features often missed by traditional methods. The Continuous Wavelet Transform was applied to decompose seismic signals, and the resulting frequency components were clustered using a Self-Organizing Map to classify seismic facies. The approach was validated using seismic data from the South Caspian Basin. The results successfully identified channel systems and facies boundaries, enhancing their delineation and enabling a more accurate interpretation of channel systems and their internal variability. This automated methodology offers valuable insights for reservoir characterization and hydrocarbon exploration, potentially reducing exploration risks and enhancing resource estimation.
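The two stages, CWT-based spectral decomposition followed by SOM clustering, can be sketched on a single synthetic trace. PyWavelets is assumed available for the wavelet transform, and the SOM below is a deliberately tiny NumPy implementation rather than a production tool; in practice the clustering runs over spectral vectors extracted across a 3-D seismic volume.

```python
# Hedged sketch: CWT spectral decomposition + a minimal 1-D SOM for clustering.
import numpy as np
import pywt

rng = np.random.default_rng(3)
trace = np.sin(np.linspace(0, 60, 600)) + 0.3 * rng.standard_normal(600)

scales = np.arange(1, 33)
coeffs, _ = pywt.cwt(trace, scales, "morl")      # (n_scales, n_samples)
features = np.abs(coeffs).T                      # one spectral vector per sample

# Minimal 1-D SOM: each unit holds a prototype spectral vector
n_units, n_iter, lr0 = 6, 2000, 0.5
protos = features[rng.choice(len(features), n_units)].copy()
for i in range(n_iter):
    v = features[rng.integers(len(features))]
    bmu = np.argmin(np.linalg.norm(protos - v, axis=1))   # best matching unit
    lr = lr0 * (1 - i / n_iter)
    for j in range(n_units):                              # neighborhood update
        protos[j] += lr * np.exp(-((j - bmu) ** 2) / 2.0) * (v - protos[j])

facies = np.argmin(
    np.linalg.norm(features[:, None, :] - protos[None], axis=2), axis=1)
print("facies class per sample:", np.bincount(facies))
```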
In structural health monitoring (SHM), ensuring data completeness is critical for enhancing the accuracy and reliability of structural condition assessments. SHM data are prone to random missing values due to signal interference or connectivity issues, making precise data imputation essential. A latent factorization of tensors (LFT)-based method has proven effective for such problems, with optimization typically achieved via stochastic gradient descent (SGD). However, SGD-based LFT models and other imputation methods exhibit significant sensitivity to learning rates and slow tail-end convergence. To address these limitations, this study proposes an RMSprop-incorporated latent factorization of tensors (RLFT) model, which integrates an adaptive learning rate mechanism to dynamically adjust step sizes based on gradient magnitudes. Experimental validation on a scaled bridge accelerometer dataset demonstrates that RLFT achieves faster convergence and higher imputation accuracy compared to state-of-the-art models including SGD-based LFT and the long short-term memory (LSTM) network, with improvements of at least 10% in both imputation accuracy and convergence rate, offering a more efficient and reliable solution for missing data handling in SHM.
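A minimal sketch of the idea, assuming a CP-style three-factor model trained only on observed tensor entries, with RMSprop scaling each update by a running average of squared gradients; the rank, synthetic data, and hyperparameters are placeholders, not the paper's settings:

```python
# Illustrative sketch (not the authors' code): latent factorization of a
# 3rd-order tensor with RMSprop-style adaptive updates on observed entries.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, rank = 20, 15, 30, 4          # e.g. sensors x channels x time steps
U, V, W = (rng.normal(0, 0.1, (d, rank)) for d in (I, J, K))

# Synthetic low-rank ground truth plus noise, 30% randomly observed
Ut, Vt, Wt = (rng.normal(0, 1.0, (d, rank)) for d in (I, J, K))
tensor = np.einsum("ir,jr,kr->ijk", Ut, Vt, Wt) \
    + 0.1 * rng.standard_normal((I, J, K))
mask = rng.random((I, J, K)) < 0.3
obs = np.argwhere(mask)

lr, beta, eps = 0.01, 0.9, 1e-8
cache = [np.zeros_like(M) for M in (U, V, W)]  # running squared gradients

for epoch in range(30):
    rng.shuffle(obs)
    for i, j, k in obs:
        e = tensor[i, j, k] - np.sum(U[i] * V[j] * W[k])   # residual
        grads = (-e * V[j] * W[k], -e * U[i] * W[k], -e * U[i] * V[j])
        for M, C, g, idx in zip((U, V, W), cache, grads, (i, j, k)):
            C[idx] = beta * C[idx] + (1 - beta) * g * g    # RMSprop cache
            M[idx] -= lr * g / (np.sqrt(C[idx]) + eps)     # adaptive update

recon = np.einsum("ir,jr,kr->ijk", U, V, W)
print("RMSE on missing entries:",
      np.sqrt(np.mean((recon - tensor)[~mask] ** 2)))
```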