Data Processing

Monitoring the evolution of nonvolcanic tremor and microseismicity, particularly in the SAFOD drilling and target zone, is a primary objective of the HRSN project. Continued analysis of HRSN data for determining detailed seismic structure, studying the systematics of similar and characteristically repeating microearthquakes, estimating the evolution of deep fault slip rates, and investigating fault zone and earthquake physics is also of great interest to seismologists. Before advanced studies of the Parkfield microseismicity can take place, however, the earthquake data must first undergo initial processing, analysis, and routine cataloging. An integral part of this process is quality control of the processed data, including a final check of the routine catalog results.

The numerous and ongoing aftershocks of the December 2003 M6.5 San Simeon and September 2004 M6.0 Parkfield earthquakes (Figure 6.5) have seriously complicated the initial processing, analysis, and location of the routine event catalog, calling for a significant revision of the ``traditional'' processing scheme we have used since 1987. Because the resources available for reviewing, picking, and locating the tens of thousands of very small microearthquakes detected by the HRSN since the San Simeon and Parkfield earthquakes are severely limited, we have opted to focus on cataloging a critical subset of the data: the similar and characteristically repeating earthquakes. These events have proven useful for a variety of applications, and their unique properties make them particularly well suited to automated processing.

Figure 6.5: Number of HRSN triggers per hour for a period beginning with the San Simeon earthquake and continuing through several months after the Parkfield earthquake. For comparison, before these two large events the average rate was less than 0.5 triggers per hour (about 10 per day). ``Eyeball'' fits of the decay curves for both events are also shown. The cumulative number of HRSN triggers in the first 5 months following San Simeon exceeded 70,000, and trigger levels remained above 150 per day up to the Parkfield earthquake. In the first month following the Parkfield earthquake, over 40,000 triggers were recorded. Extrapolation of the decay curve indicates that daily trigger levels will not return to near pre-San Simeon levels until well into 2007. At the same time, funding to support analysts' time for routine processing and cataloging of the events has virtually dried up, requiring the development and implementation of a new scheme for cataloging events.
\begin{figure}
\begin{center}
\epsfig{file=hrsn05_trigs.eps, width=8cm, clip=}
\end{center}
\end{figure}
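One standard way to formalize an ``eyeball'' decay fit like the one in Figure 6.5 is the modified Omori law, $n(t) = K/(t+c)^{p}$, where $n(t)$ is the aftershock (here, trigger) rate at time $t$ after the mainshock. The sketch below shows how such a fit and extrapolation can be carried out; the trigger counts and fitted parameters are illustrative stand-ins, not the actual HRSN values behind the figure.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Modified Omori law: trigger rate n(t) = K / (t + c)**p,
# with t in days since the mainshock.
def omori(t, K, c, p):
    return K / (t + c) ** p

# Hypothetical daily trigger counts (illustrative only; not the
# actual HRSN counts plotted in Figure 6.5).
t_obs = np.array([1, 2, 5, 10, 20, 50, 100, 150], dtype=float)
n_obs = np.array([2600, 1500, 700, 380, 200, 85, 45, 32], dtype=float)

# Least-squares fit of the decay parameters.
(K, c, p), _ = curve_fit(omori, t_obs, n_obs, p0=(3000.0, 1.0, 1.0))

# Extrapolate to find when the rate falls back to the
# pre-San Simeon background of ~10 triggers per day.
background = 10.0
t_return = (K / background) ** (1.0 / p) - c
print(f"p = {p:.2f}; rate reaches {background}/day after ~{t_return:.0f} days")
\end{verbatim}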

Most of our effort during 2000-2002 went into implementing the emergency upgrade and SAFOD expansion of the HRSN. Routine processing of the data collected during that period was deferred until the upgrade and installation efforts were completed. In 2003, we began in earnest the routine processing of the ongoing data stream, focusing initially on refining our processing procedures to make the task more efficient and to ensure quality control of the processed catalogs. We also began working back in time to fill in the gap that developed during the deferment period. Because routine processing of the post-San Simeon and Parkfield data is effectively impossible at this time, owing to the overwhelming number of aftershocks, we have suspended routine processing of the ongoing data with our existing procedures and have instead focused on developing an automated processing scheme for the similar and repeating events.

The basic scheme we are now developing involves first compiling a set of reference event waveforms, picks, and magnitudes. Waveform segments from the reference event catalog are then automatically cross-correlated against the continuous HRSN data now being collected and archived; a waveform match indicates the occurrence time of a similar event and also yields a cross-correlation time alignment at each station. Following this initial sweep, the time alignments are used to automatically compile event-triggered (snippet) data from the continuous records, and phase-specific cross-correlation alignments are made to obtain fine-scale P and S phase picks. These picks are then used to compute catalog locations, and low-amplitude spectral ratios between the aligned waveforms and the reference event are computed automatically to estimate seismic moment and magnitude. The two core operations are sketched below.
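Purely as an illustration, the following sketch implements the two core operations in plain Python/NumPy: a normalized cross-correlation sweep of a reference template over continuous data, and a magnitude estimate from the amplitude ratio to the reference event. The function names, the detection threshold, and the single-channel simplification are our illustrative assumptions; the actual scheme operates on multichannel HRSN records and uses spectral rather than simple amplitude ratios.

\begin{verbatim}
import numpy as np

def matched_filter_scan(continuous, template, threshold=0.8):
    """Slide a reference-event template over continuous data and
    return (sample_offset, cc) pairs where the normalized
    cross-correlation exceeds the (illustrative) threshold."""
    n = len(template)
    tpl = (template - template.mean()) / (template.std() * n)
    detections = []
    for i in range(len(continuous) - n + 1):
        win = continuous[i:i + n]
        sd = win.std()
        if sd == 0.0:          # skip dead/flat data windows
            continue
        cc = float(np.sum(tpl * (win - win.mean()) / sd))
        if cc >= threshold:
            detections.append((i, cc))
    return detections

def relative_magnitude(ref_mag, amp_ratio):
    """Magnitude of a detected event from its amplitude ratio to
    the reference event: one magnitude unit per factor of ~10 in
    amplitude (a crude stand-in for the spectral-ratio step)."""
    return ref_mag + np.log10(amp_ratio)
\end{verbatim}

In practice, the sweep over months of continuous data would be done with FFT-based correlation for speed, and detections falling within a few samples of one another would be collapsed to the offset with the highest correlation coefficient.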


