If we Californians didn't already know that we live in earthquake country, and that the question is not "if" but "when" the next Big One will occur, the details that went into the latest earthquake forecast model, published last week, could be truly frightening. The researchers behind the model ran more than 250,000 earthquake scenarios on a total of 350 fault segments to compute where significant shakers are likely to occur in the state during the next 30 years. The computations are so complex that a regular desktop computer would have needed roughly eight years of continuous number crunching to yield a result. In order not to be outrun by the forecast quake, the scientists used a supercomputer instead, which computed each model in less than an hour.
|Figure 1 ( Adapted from WGCEP, 1988 )|
|Figure 2 ( Adapted from UCERF-3 )|
But despite its enormous detail, the latest "Uniform California Earthquake Rupture Forecast" (UCERF-3) is still a forecast and by no means an earthquake prediction. In casual conversations about the weather, most people no longer distinguish between those two terms. But look closely at the bulletins about future weather issued by the National Weather Service (NWS) and you will see the difference. The NWS folks never say that it will rain exactly 2 inches at Fisherman's Wharf starting at 3 pm tomorrow - that would be a prediction. Instead, their bulletins read that there is a 90 percent chance of significant rain at the northern end of the Peninsula in the next 24 hours - which is a forecast.
Such distinctions must also be made in seismology. A meaningful earthquake prediction has to fulfill three criteria: one has to state exactly where, when, and with what magnitude an earthquake will strike. Despite considerable research efforts in many countries in recent decades, nobody anywhere has succeeded in getting all three of those criteria consistently right. Hence: The prediction of earthquakes is still an elusive goal. But forecasts like UCERF-3 are the next best thing. The main reason for that is precisely the large amount of detail mentioned earlier: Over the years these rupture forecast models have become much more reliable and should therefore be taken quite seriously.
One example of the improvement is the number of California faults on which the models are based. Figure 1 shows the handful of faults used for the first such calculation in 1988 - one year before the magnitude 6.9 Loma Prieta earthquake ruptured part of the Northern San Andreas Fault. The faults are colored to show the different rates at which their flanks move past each other; yellow marks the highest rate, about 1.5 inches per year. Compare this with Figure 2, which shows all the faults and subfaults on which the latest forecast is based. It is much more detailed than the model from almost 30 years ago.
Another reason for the improvement in the forecasts is that seismologists have learned much more about the actual processes taking place during an earthquake. Until recently, nobody could imagine that a rupture during a temblor could jump from one fault to a neighboring one. But modern techniques, which use seismic waves to illuminate the actual breaking of the rock, have shown that such jumps happen regularly - resulting in stronger and more extended earthquakes than previously thought possible. In the next blog, we will look in more detail at the forecast for the faults in our local Bay Area. (hra098)
A new long-term earthquake forecast for California.
The seismoblogger was amazed how some mass media reacted to the latest announcement of earthquake probabilities in California. "Risk of 8.0 earthquake in California rises" read the headline in the Los Angeles Times. Fox News beamed over the TV channels "Risk of mega-earthquake increases" and LA's KABC even went so far as to say the "USGS predicts massive earthquake in California within 30 years". While none of these statements is completely untrue, they certainly leave the impression that the Big One is imminent and that the USGS can predict giant temblors. What gave rise to these alarmist headlines during the last few days?
On Monday the United States Geological Survey (USGS) released the latest findings of a group of nearly 50 Earth scientists from more than a dozen institutions. The goal of this "Working Group on California Earthquake Probabilities" was to estimate and compute just what its name suggests: What are the chances that potentially damaging earthquakes - those with magnitudes larger than 5 - will strike California within the next 30 years? Such efforts to determine the probabilities are not new. The latest release is the third in a series of similar calculations which began in 1995. Such computations are technically called "probabilistic risk analysis". They enable seismologists to calculate the chances that an earthquake of a certain size will strike a certain segment of a fault during a specified time window (older blog: Earthquake Probabilities in the Bay Area). These calculations are based on our knowledge of the past seismicity along the various active faults in California and also on precise measurements of how fast the plates move in our state. As research progresses and our understanding of fault behavior improves over time, the model calculations of probabilities need to be refined every few years.
That is exactly what happened on Monday, when the very complex and detailed new study of the chances of earthquake occurrence on all 350 known fault segments in California was published. But unfortunately, many in the media focused on just one small aspect of the report: The likelihood that a truly big quake of magnitude 8 or greater will occur in California over the next 30 years increased from 4.7 percent in the last report to 7 percent. Is this jump really significant? Should we lose sleep over this reevaluation of the chances of a mega-quake? When we look at the other numbers in this report, the answer to both questions is simply "no". The chances that during the coming three decades some part of California will get hit by a magnitude 7 earthquake are a whopping 93 percent - which translates to "almost certain". And one step further: We can be sure (larger than 99 percent) that we will have to live through a magnitude 6.7 in the next 30 years.
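For readers curious about how such a "30-year probability" relates to an earthquake rate: forecasts of this kind are often expressed with a simple Poisson model, in which the chance of at least one event in a time window follows from an average annual rate. Here is a minimal sketch - the function names are mine, and UCERF-3 itself uses far more elaborate, partly time-dependent models:

```python
import math

def poisson_prob(annual_rate, years):
    """P(at least one event in `years`) under a Poisson model."""
    return 1 - math.exp(-annual_rate * years)

def annual_rate_from_prob(prob, years):
    """Invert: the annual rate implied by a probability over `years`."""
    return -math.log(1 - prob) / years

# The 7% chance of a magnitude 8 or larger quake in 30 years
# implies a mean recurrence interval of roughly 1/rate years:
rate_m8 = annual_rate_from_prob(0.07, 30)
print(round(1 / rate_m8))  # → 413 (years)
```

Run against the numbers above, the 7 percent figure for a magnitude 8 corresponds under this toy model to a mean recurrence interval of roughly four centuries - rare, but by no means impossible.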
The only way not to lose sleep over these numbers is to be prepared - and the preparations for an "almost certain" magnitude 7 quake are exactly the same as those for a "very unlikely" (7%) magnitude 8 event. Hints and examples on how to get ready for and survive the shaking can be found in numerous websites. For example, the Red Cross provides earthquake preparedness information. In the next blog, we will discuss in more detail how the experts computed the most recent probabilities. (hra097)
USGS press release
In the seismoblogger's California-centric view of the world, tectonic earthquakes in the US occur mainly along the West Coast and in Alaska, some big ones are generated by the volcanoes in Hawaii, and on a very rare occasion the vast area east of the Rockies is shaken by a significant temblor, like the one on August 23, 2011, when the major population centers in the East rocked to the beat of the waves of the magnitude 5.8 Virginia earthquake.
|Figure 1: A magnitude 6.9 Earthquake in Idaho in 1983 caused a fault scarp up to 8 feet tall (between white arrows). Borah Peak, Idaho's highest mountain, is in the background.|
However, a quake on Saturday morning served as a reminder that the seismogenic zones in the US are much more widespread than is commonly thought. Shortly after 10:44 am MST, most of Idaho was shaken by a magnitude 4.9 temblor, which originated 5 miles under the small town of Challis in Custer County in the center of the state. None of Challis' nearly 1000 inhabitants was hurt, but some houses suffered damage, rockslides blocked a road here and there, and the power was out for hours. The quake was felt 200 miles away in Idaho's capital Boise as well as in Montana's Bitterroot Valley.
According to the Idaho Geological Survey, Saturday's quake was the strongest so far in a series of shakers which began in March 2014. Since then the area around Challis has endured hundreds of slight to moderate temblors, among them a magnitude 3.7 shortly before Christmas. So far, it is not clear to geologists whether the quakes indicate new life on a dormant fault or if they are lined up on a hitherto unknown fault line.
A look into Idaho's earthquake history reveals that the recent series of quakes is by no means abnormal. For instance, on May 12, 1916, Boise was hit by a shock which wrecked chimneys and caused people to rush into the streets. Reclamation ditches were damaged and the flow of natural gas altered. It was felt at Loon Creek, 120 miles northeast, and in eastern Oregon - an area of 50,000 square miles. An intensity VII earthquake occurred within the state on July 12, 1944. The Seafoam Ranger Station building shook so hard the occupants thought it was coming apart. Several people reported that the shaking was so violent they were unable to walk. Another observer reported that rocks rose at least a foot in the air and looked like a series of explosions up the hill. Part of a canyon wall collapsed near Lime Creek. Cracks opened 100 yards long in Duffield Canyon and cracks one to three inches across and several hundred yards long opened on the road below Seafoam. Two chimneys fell at Cascade. This shock was felt over 70,000 square miles, including all of central Idaho, and parts of Washington, Oregon, and Montana.
In fact, the strongest earthquake ever to hit Idaho in historic times occurred not far from the current epicenters near Challis, when on October 28, 1983, a magnitude 6.9 quake (the same strength as the Bay Area's 1989 Loma Prieta quake) struck under Borah Peak, Idaho's highest mountain. The shaking was so strong that two people were killed in Challis, 11 commercial buildings and 39 private houses sustained major damage, and more than 200 houses sustained minor to moderate damage. Geologically, the most dramatic consequence of this quake was a 20 mile long zone of fault scarps and ground breakage. In some sections the ground was offset by more than 8 feet (see Figure 1).
The mountainous area of Idaho where the earthquakes occur is part of the Central Idaho Seismic Zone (also called the Centennial Tectonic Belt). The zone is approximately 200 miles long by 50 to 100 miles wide and is characterized by rugged basin and range topography and the highest elevations in Idaho. The zone contains high levels of earthquake activity and at least 6 major active faults crisscross the region (see figure 2). (hra096)
|Figure 2: Saturday's earthquake near Challis as well as the Borah Peak quake from 1983 occurred in Idaho's Central Seismic Zone. (Map: Idaho Geological Survey)|
Josh Bloom's prototype EEW device.
More often than not earthquakes are associated with destruction, losses and general malaise rather than with new and exciting business opportunities. Sure, there are the makers of equipment to measure ground motion who make money, and seismic retrofitting of older structures can be a costly effort. Over the last decade UC Berkeley alone spent more than two billion dollars to make its campus more resilient against the inevitable earthquake shaking. But neither the retrofit nor the production of seismometers has the glamorous image of today's high tech and smart technology. This, however, is about to change as early warning before earthquake shaking is slowly rolled out in California, along the entire US West Coast and elsewhere.
Last week, leading California politicians, among them Lieutenant Governor Gavin Newsom, shared the stage at the Third International Conference on Earthquake Early Warning (EEW) held on the UC Berkeley campus. Under the glaring spotlights of TV crews, those politicians offered their strong support for the EEW demonstration system put together over the last few years under the name "Shake Alert" by a consortium of government agencies and universities, among them the Berkeley Seismological Laboratory (BSL). (see blog from October 2, 2012). While the elected officials and Shake Alert scientists discussed possible sources for the 120 million dollars necessary to fully build and run the West Coast system over the next five years, engineers at established companies, entrepreneurs at start-ups and tinkerers in their garages or living rooms were already busy designing high tech gadgets and apps which can take the EEW alerts to new levels.
Take for instance AtHoc, a 15 year old company in San Mateo, Calif., which specializes in crisis communication. Based on their experience with the EEW system in Mexico (see blog April 19, 2014), AtHoc engineers developed a technique by which warning messages from Shake Alert are routed through the transmitters of the NOAA weather radio system. With broadcast receivers designed by AtHoc, early warning alerts can be received anywhere, independent of internet and cell phone communication.
Another example is Joshua Bloom, a professor in the Astronomy Department of UC Berkeley. As a member of the advisory board for the BSL, he became a beta tester for receiving real-time Shake Alert messages on his laptop through the internet. But as a Berkeley resident living close to the Hayward Fault, Bloom took the idea further. Instead of relying on his laptop to show the message, he built a wireless receiver and connected it to speakers loud enough to alert all rooms in his house. He envisions that his current prototype home alert could one day become as ubiquitous - and mandated - as smoke detectors. Bloom describes his device in detail on his blog.
Ingrid Johanson, a researcher at the BSL, is working along similar lines. Based on an Arduino platform, she designed a demonstration system which automatically responds to a Shake Alert message by turning off appliances or machinery so that they are not damaged during an earthquake. She also showed how flashing LED lights or other indicators that don't rely on being in front of a laptop could extend the reach of early warning messages.
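The logic behind such automatic responders is simple at heart: parse the incoming alert and fire a callback when the expected shaking crosses a threshold. The toy sketch below illustrates the idea only - the message format, field name, and threshold are all invented for illustration and do not match the real Shake Alert feed:

```python
import json

def handle_alert(message, shutoff, threshold=4.0):
    """Fire the shutoff callback when the alert's predicted
    intensity (a hypothetical field name) crosses the threshold.
    Returns True if the shutoff was triggered."""
    alert = json.loads(message)
    if alert.get("predicted_intensity", 0.0) >= threshold:
        shutoff()
        return True
    return False

# Example: a lambda stands in for the relay that cuts the power.
actions = []
handle_alert('{"predicted_intensity": 5.2}', lambda: actions.append("relay off"))
print(actions)  # → ['relay off']
```

In a real device, the callback would drive a relay or GPIO pin rather than append to a list, and the threshold would be tuned to the equipment being protected.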
If you are tinkering with or are developing an add-on to Shake Alert and other EEW systems, let the blogger know. We are planning to create a developer forum at the BSL to help you integrate your ideas with the science of Earthquake Early Warning. (hra095)
Finite fault model for M6.0 South Napa earthquake.
Two days after the early morning magnitude 6.0 earthquake shook the Wine Country and beyond, it has become clear that the City of Napa bore the brunt of the damage. More than 70 buildings in the heart of the town were red-tagged, declaring them too damaged for further occupancy. In addition, more than four fifths of the 250 people injured during the quake lived in and around Napa.
There are many factors which determine the extent and location of damage during a temblor. First and foremost on the list is the type and the condition of a building. Structures built from unreinforced masonry or adobe blocks are much more prone to damage than sturdy wooden buildings or highrises constructed with well reinforced concrete on rubber isolators. Unfortunately many buildings in downtown Napa - although well maintained - were of the older masonry type. A second factor is the soil below a structure. In general, buildings founded on bedrock fare much better in an earthquake than houses built on unconsolidated sediments. Again, much of Napa is built on the loose sediments of the Napa River. While they provide a wonderful terroir for viticulture, they are less than ideal ground to build on.
Thirdly, the earthquake itself can determine the zones of most severe damage. For one, the destructive shear waves are not radiated from the hypocenter in an omnidirectional pattern. Instead they are emitted in the shape of a cloverleaf, with the strongest lobes along and perpendicular to the fault. Napa lies exactly in the axis - or "along the strike," as seismologists call it - of the West Napa Fault, the assumed origin of Sunday's quake. But another factor during the earthquake can also determine the zones of damage: the direction of the rupture itself.
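The cloverleaf can be written down compactly: for an idealized vertical strike-slip fault (a "double couple" source), the horizontal S-wave amplitude varies with azimuth roughly as cos(2θ), with maxima along and perpendicular to the fault and nodes at 45 degrees to it. A minimal sketch, simplified from the full radiation-pattern formulas:

```python
import math

def sh_amplitude(azimuth_deg):
    """Relative SH-wave amplitude of an idealized vertical
    strike-slip double couple, as a function of azimuth
    measured from the fault strike."""
    return abs(math.cos(math.radians(2 * azimuth_deg)))

print(sh_amplitude(0))             # → 1.0 (along strike: maximum)
print(sh_amplitude(90))            # → 1.0 (perpendicular: maximum)
print(round(sh_amplitude(45), 6))  # → 0.0 (45 degrees: a node)
```

Since Napa sits along strike of the West Napa Fault, it lies in one of the maximum lobes of this pattern.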
Using the detailed recordings of eight high quality seismometer stations, Doug Dreger from the Berkeley Seismological Laboratory put together a model for the fault rupture of Sunday's quake. His color coded "fault model" (see figure) is a two-dimensional view of the fault plane. Purple means no movement at all during the rupture, blue means a little slip, and the redder the color gets, the farther the two flanks of the fault slipped past each other (see scale at the bottom of the figure).
The red dot in the middle is the actual hypocenter at a depth of about 7 miles. From there the rupture moved left, in this case north-north-west, and up, with three distinct zones where the slip exceeded three feet. These zones are represented by the yellow-brown areas in the plot. In total the rupture extends for approximately 7.5 miles from the epicenter on a trajectory that points directly towards the town of Napa. The effect of this directionality is comparable to the punch of a boxer: a fist driven directly at you hurts much more than a blow that strikes at a glancing angle or misses you completely. (hra094)