Toward Earthquake Early Warning in Northern California

Gilead Wurman and Richard M. Allen

Introduction

Although there have been recent advances in the theory behind Earthquake Early Warning (EEW), several challenges remain to the implementation of such EEW systems in Northern California. We have been working toward a functional EEW system since August 2005, and as of February 2006 an offline version of ElarmS (Earthquake Alarm System) has been running automatically, in a non-interactive fashion, following every event of $M \geq 3$ in Northern California.

The non-interactive processing of these events has provided us with valuable data regarding the performance of ElarmS in real scenarios, particularly for events in and around the Bay Area. We present the statistical performance of this non-interactive processing, and discuss two specific events which reflect possible major earthquake scenarios.

Challenges of Northern California

Implementing EEW in Northern California presents specific challenges that have required improvements to the ElarmS methodology used in Southern California. We must integrate data from broadband velocity instruments and strong-motion accelerometers spread across two networks: the Northern California Seismic Network (NCSN) and the Berkeley Digital Seismic Network (BDSN). The disposition of stations in Northern California is shown in Figure 12.1.

The methodology by which ElarmS estimates an earthquake's magnitude relies in part on the measurement of maximum predominant period, $\tau _{p}^{max}$ (Olson and Allen, 2005). However, this method is very susceptible to noise pollution, which is particularly problematic for events smaller than $M \sim 4.5$. To help constrain the magnitudes of small events, we have incorporated a second metric using the peak amplitude of either the displacement ($P_{d}$) or velocity ($P_{v}$) record, following similar work in Taiwan (Wu et al., 2006). The linear average of the two metrics ($P_{d}$ and $\tau _{p}^{max}$, or $P_{v}$ and $\tau _{p}^{max}$) significantly improves our magnitude estimates for events of all sizes. The $P_{d}$ or $P_{v}$ metric is less susceptible to noise at low magnitudes, but tends to saturate at higher magnitudes. As $\tau _{p}^{max}$ is susceptible to noise but not to saturation, the two metrics complement one another across all magnitudes.
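The combination of the two metrics can be sketched as follows. The scaling relations and coefficients below are purely illustrative placeholders, not the calibrated ElarmS or Wu et al. regressions; only the equal-weight linear averaging of the two independent estimates is taken from the text.

```python
import math

def magnitude_from_taupmax(taup_max_s, a=7.0, b=5.9):
    # Hypothetical linear scaling of log10(tau_p^max) to magnitude;
    # coefficients a and b are illustrative, not the ElarmS calibration.
    return a * math.log10(taup_max_s) + b

def magnitude_from_pd(pd_cm, dist_km, c=1.5, d=1.2, e=6.0):
    # Hypothetical attenuation-corrected peak-displacement (P_d) relation;
    # coefficients c, d, and e are illustrative only.
    return c * math.log10(pd_cm) + d * math.log10(dist_km) + e

def combined_magnitude(taup_max_s, pd_cm, dist_km):
    # Equal-weight linear average of the two independent estimates,
    # as described in the text: P_d constrains small events, while
    # tau_p^max avoids saturation in large ones.
    m_taup = magnitude_from_taupmax(taup_max_s)
    m_pd = magnitude_from_pd(pd_cm, dist_km)
    return 0.5 * (m_taup + m_pd)
```

In practice each station contributes both estimates, and the event magnitude is an average over all triggered stations; the sketch above shows only the single-channel combination.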

Results of non-interactive processing

Figure 12.1: Map of Northern California showing NCSN and BDSN stations, and 68 events since February 2006.
\begin{figure}\begin{center}
\epsfig{file=wurman06_1_1.eps, width=8cm}\end{center}\end{figure}

Since February of 2006, ElarmS has been running offline, in a non-interactive fashion (meaning with no human input or oversight), after every event in Northern California of $M \geq 3$. The processing is triggered automatically ten minutes after the event notification is received, to allow time for the necessary data to be recorded. Periodically, significant improvements to the ElarmS methodology are incorporated into the non-interactive processing. At that time all the events are reprocessed using the most current version of the code. The reprocessing is done using only the data available at the time of the initial processing, and is still performed non-interactively. Thus, while the reprocessed data does not reflect the fully automated performance of the system, it does reflect how the system would have performed, had the most current version of the code been in place at the time of the events.

As of this writing, a total of 70 instances of non-interactive processing have occurred. Of these, one is a duplicate event, due to the email notification system posting an update to an existing event. One instance was a false event. This was not the result of a false detection by ElarmS, but of an erroneous email notification. The geographic distribution of the remaining 68 events is shown in Figure 12.1. Of these, one event was offshore Mendocino, with no stations within 100 km of the source, and two events suffered system-related processing errors (not resulting from the ElarmS code).

Figure 12.2: Magnitude errors from non-interactive processing. Top: magnitude estimate following first second of p-wave data. Middle: ``alarm-time'' magnitude error, using at least 4 seconds of p-wave data from 4 channels. Bottom: final magnitude error, using all available stations within 100 km.
\begin{figure}\begin{center}
\epsfig{file=wurman06_1_2.eps, width=8cm}\end{center}\end{figure}

The remaining 65 events range in magnitude from $M_{d}$ 2.86 to $M_{L}$ 4.67. The results for these 65 events are presented in Figure 12.2. This figure shows the magnitude errors (with respect to network-based magnitudes, usually $M_{w}$ or $M_{L}$) produced by ElarmS at three different times for each event. The initial magnitude error refers to the magnitude estimate based on only the first second of p-wave data at the first station(s) to detect the event. This is the earliest possible magnitude determination, which can be used to give the maximum warning time. The initial magnitude has significant scatter ( $\sigma \sim 0.8$ magnitude units) because it often relies on data from a single station.

The second plot in Figure 12.2 shows the errors at ``alarm-time'', which we define to be the time at which at least four seconds of p-wave data are available from at least four different channels. This definition is meant to reflect a confidence level in the magnitude which is sufficient to disseminate a public alarm. The magnitude error at this time is considerably less than in the first second ( $\sigma \sim 0.5$ magnitude units). Note that there are fewer events represented in this plot (42 events vs. 65 in the other plots), because not all of the events are ever detected in enough channels to meet the alarm criteria. This is primarily due to the weak signal from small ($M \sim 3$) events, and in some cases results from a lack of enough stations within 100 km of the epicenter.
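The alarm criterion described above can be expressed as a simple function of the per-channel p-wave trigger times. This is a minimal sketch of the criterion as stated in the text (at least four seconds of p-wave data on at least four channels); the function name and input representation are assumptions for illustration.

```python
def alarm_time(trigger_times_s, min_channels=4, min_seconds=4.0):
    """Return the earliest time (seconds after origin) at which at least
    `min_channels` channels each have at least `min_seconds` of p-wave
    data, or None if the criterion is never met.

    `trigger_times_s` holds one p-wave trigger time per channel.
    """
    if len(trigger_times_s) < min_channels:
        # Too few channels ever trigger: no alarm is issued, as for the
        # small events excluded from the middle plot of Figure 12.2.
        return None
    # Each channel has min_seconds of data at (trigger + min_seconds);
    # the criterion is met when the min_channels-th channel gets there.
    ready_times = sorted(t + min_seconds for t in trigger_times_s)
    return ready_times[min_channels - 1]
```

For example, channels triggering at 3.0, 3.5, 4.0, and 5.0 s after origin would yield an alarm time of 9.0 s, consistent with the few-second alarm delays reported for the scenario events below.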

The lowermost plot in Figure 12.2 shows the error in the final magnitude determination for each event, using all available data from stations within 100 km of the source. Note that the scatter has increased over the previous plot ( $\sigma \sim 0.6$ magnitude units) due to the incorporation of events which did not meet the alarm criteria.

Two scenario events

Figure 12.3: Plots of magnitude error vs. time for two representative events. Top: $M_{L}$ 4.67 Gilroy earthquake of 15 June, 2006. Bottom: $M_{L}$ 4.7 Santa Rosa earthquake of 2 August, 2006. Vertical bars mark, as labeled, the ``alarm time'' for the event and the arrival times of severe ground motions at major urban centers in the Bay Area. Arrival times are based on a velocity of 3.55 km/s.
\begin{figure}\begin{center}
\epsfig{file=wurman06_1_3.eps, width=8cm}\end{center}\end{figure}

Among the 65 events processed non-interactively by ElarmS, two moderate events represent likely hazardous earthquake scenarios for the Bay Area, and thus provide some insight into what can be expected of ElarmS in a real situation. For these two events we use $M_{L}$ as a reference, even though $M_{w}$ values exist for both. This is because $M_{L}$ is sensitive to the same frequencies ($\sim $1-2 Hz) as ElarmS, and because $M_{L}$ is more directly related to the severity of the event in terms of damage to persons and property.

The first event is a $M_{L}$ 4.67 event near Gilroy, CA on 15 June, 2006. This event is located near the southern Calaveras fault, close to the likely epicenter of a Calaveras or Southern Hayward fault event. Figure 12.3 shows the magnitude error vs. time for this event in relation to the arrival time of significant shaking at San Francisco, Oakland and San Jose.

Initial detection of this event by ElarmS occurred 3 seconds after origin, and the initial magnitude estimate, one second later, was 5.2. The magnitude estimate came down over the following five seconds as more data were incorporated, until the alarm criteria were met at 9 seconds after origin. At that time, the magnitude estimate was 4.44, only 0.23 magnitude units below the actual $M_{L}$. The vertical lines on the plot represent the arrival of severe ground shaking at the three major urban centers in the Bay Area, based on a move-out of 3.55 km/s. San Jose experienced peak ground shaking only 12 seconds after event origin, meaning San Jose would have had about 3 seconds of warning in this event, not considering telemetry and dissemination delays. However, Oakland and San Francisco would have had 20 and 22 seconds of warning, respectively, for this event. These warning times depend primarily on the disposition of stations around the epicenter, so they would be comparable for a magnitude 7 event.
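The warning times above follow from simple arithmetic on the 3.55 km/s move-out: severe shaking arrives at an epicentral distance $d$ at $d/3.55$ seconds after origin, and the warning time is that arrival time minus the alarm time. A minimal sketch (the distance value below is back-calculated from the 12-second San Jose arrival quoted in the text, purely for illustration):

```python
def warning_time_s(dist_km, alarm_time_s, wave_speed_km_s=3.55):
    # Arrival of severe ground shaking, assuming the 3.55 km/s
    # move-out used in the text; negative results mean shaking
    # arrives before the alarm can be issued.
    arrival_s = dist_km / wave_speed_km_s
    return arrival_s - alarm_time_s

# Gilroy scenario: San Jose arrival at 12 s implies ~42.6 km epicentral
# distance; with the alarm at 9 s this leaves about 3 s of warning,
# before telemetry and dissemination delays.
san_jose_warning = warning_time_s(42.6, 9.0)
```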

The second scenario event is a $M_{L}$ 4.7 event near Santa Rosa on 2 August (local time), 2006. This event is located near the Rodgers Creek fault, near the likely epicenter of a southward-rupturing Rodgers Creek/Hayward fault event. The bottom plot in Figure 12.3 shows the history of this event in the same manner as above.

This event was initially detected 3 seconds after event origin. The initial magnitude, one second later, was rather high, at 6.4. This is a large error, which highlights the utility of waiting for more data to become available rather than issuing the alarm immediately. In the next second, the magnitude dropped to 4.2, and by the time the alarm criteria were met at 8 seconds after origin, the magnitude had dropped further to 4.0. At alarm time, both San Francisco and Oakland had 11 seconds until the arrival of severe ground shaking. However, the magnitude estimate at that time was low, and only rose to about 4.6 at 13 seconds after the origin, leaving only 6 seconds of warning for San Francisco and Oakland. San Jose experienced severe ground motions 37 seconds after origin, so even with the additional 5-second delay for the magnitude estimate to rise, it still had 24 seconds of warning in this instance.

The poor initial estimation of the magnitude of the latter event is primarily because most of the stations to the north of the Bay Area are NCSN strong motion stations, which are susceptible to noise pollution below $M \sim 5$. For large earthquakes this is not a problem, but in smaller events high-gain broadband velocity sensors yield superior data.

Conclusion

We are now beginning the process of moving ElarmS from the offline development stage to real-time testing at the Berkeley Seismological Laboratory. Based on its performance on 65 events in a non-interactive offline setting, we expect that ElarmS will perform well under real-time testing, without major modification from its present version.

Acknowledgements

We would like to thank Doug Neuhauser and Bob Uhrhammer for information and discussions related to station equipment and networks. This research was funded by USGS/NEHRP Grant #05HQGR0074.

References

Olson, E.L. and R.M. Allen, The deterministic nature of earthquake rupture, Nature, 438, doi:10.1038/nature04214, 212-215, 2005.

Wu, Y.-M., H.-Y. Yen, L. Zhao, B.-S. Huang, and W.-T. Liang, Magnitude determination using initial P waves: A single-station approach, Geophys. Res. Lett., 33, L05306, doi:10.1029/2005GL025395, 2006.

Berkeley Seismological Laboratory
215 McCone Hall, UC Berkeley, Berkeley, CA 94720-4760
© 2006, The Regents of the University of California