LECTURE 10 NOTES - EARTHQUAKE HAZARDS (updated 11/15/97)
Instructor: Professor Barbara Romanowicz
Director of Seismological Laboratory
Office Hours: Thursday 2-4 pm, by appointment only
475 McCone Hall
EARTHQUAKE HAZARDS
No reliable method yet exists for short-term earthquake prediction.
Even if one is found, it is important to develop long-term planning, seismic
design and construction in order to minimize the damage and loss from a
large earthquake, whether or not it is expected.
Earthquake Hazard Assessment
Attempt to forecast the likelihood and effects of earthquakes in years to come.
It is subject to all the uncertainties of a predictive science.
There is no cookbook method of seismic hazard evaluation in which we can
have complete confidence.
Earthquake hazard assessment is based on the assumption that earthquake
processes of the recent past will be the same (or similar) to those of the near
future. Valuable information comes from:
1) historical record
earthquake catalogs
instrumental seismographic record (only last 80-100 years)
Important body of knowledge that pertains directly to what might happen in
the same region in the future. Earthquakes are related to tectonic phenomena
that operate on geological time scales: the longer the record, the better its
predictive value.
2) prehistorical record (paleoseismology)
Also very valuable because time span is everywhere greater than historical
record, which makes it more meaningful to extrapolate into the future.
Difference between Hazard and Risk
Hazard = physical phenomenon itself which underlies the danger
ground shaking
fault movement
liquefaction
also secondary hazards: tsunamis, landslides (sometimes cause greater
loss of property and life than seismic shaking itself):
N China 1920 (M 8.7) massive landslides in loess killed 100,000.
Peru 1970 (Mw=7.9) Massive debris avalanches killed 15,000 in one city
Alaska 1964 (Mw 9.2): tsunami inundation caused much greater loss than shaking
Risk = likelihood of human and property loss that can result from hazard
e.g. structural damage from ground motion
If no life or property is at stake, then the risk is 0 no matter how great the hazard:
1989 Macquarie Ridge (M 8.3) south of New Zealand: no losses
1994 Bolivia (M 8.2) deep earthquake: no losses
1960 Agadir, Morocco (M 5.5): killed more than 12,000
compare Loma Prieta (M 6.9) 1989 and Armenia (Spitak) 1988 (M 6.8).
Hazard evaluation: requires contributions from geologists and geophysicists
Risk evaluation and mitigation: engineers, planners, public officials
In any case hazard understanding is needed before risk evaluation can be
made.
Hazards of faults
Buildings in fault zones are exposed to very high earthquake risk.
No measures can prevent this:
neither the most earthquake-resistant bracing and building materials nor the
latest principles of reinforcement can guarantee that a property astride a
fault would survive a moderate earthquake without severe damage.
San Fernando 1971
30% of houses within the fault zone posted unsafe
5% of houses adjacent to the fault zone
80% of houses in the fault zone suffered moderate or worse damage
30% of houses immediately beyond the fault zone
Greatest Hazards: ground surface ruptures
a ground shift of several inches may be sufficient to cause severe damage to
buildings;
several feet in a large earthquake could demolish a well-engineered
building.
severe earthquake vibrations in fault zone
intensity of ground shaking is usually very strong along the fault (even
without ground rupture).
Effects: foundations cracked and thrust apart; vertical supports collapse or are
knocked askew; floors and roofs sag or fall. It is important to know where the
faults are.
The most dangerous fault zones in the western US are well known by
geologists and soil/structural engineers and mapped.
pressures from landowners, developers, ignorant government officials,
outmoded zoning laws, public ignorance and short memory
---> buildings continue to be constructed, bought and sold in fault zones.
Many hundreds can be found straddling obvious evidence of recent faulting:
It is a misconception that contemporary building codes take the danger of
faults and faulting into consideration.
Except for California, where building codes recognize the high risk of
earthquakes and set minimal standards of design and construction, there is
no adequate recognition of the existence of special geological conditions in
fault zones.
Hayward Fault: example of past neglect of the special problem of fault zones
1) there were large events in the 19th century
2) currently: creep and microseismic activity
Ever-present risk of a large tremor.
The list of schools, important buildings and places of public assembly
located directly above or near the fault runs over a page.
In many cases, even careful reinforcement would not suffice and there are
many homes, businesses built with less knowledge and care.
How far is a given location from a fault?
e.g. Santa Rosa: 19 miles from the SAF
Downtown San Francisco: 9 miles from the SAF
Oakland: 15 miles from the SAF, traversed by the Hayward Fault
Is a given fault active or inactive?
San Fernando earthquake in 1971: Occurred on a fault well mapped on
detailed geologic maps and then largely ignored as inactive.
Northridge: on previously unmapped fault?
It is safe to assume that if a property is not on a known active fault, it may
be near an unknown fault which can cause a strong local earthquake --->
earthquake-resistant reinforcement should be made everywhere in California.
Active/inactive faults:
Assuming that faults are either active or inactive is naive. Faults have all
degrees of activity:
San Andreas F.
N. Anatolian Fault (Turkey): large earthquakes every 100 years
Dixie Valley Fault (Nevada): large earthquakes every few 1000 years
some faults highly active but produce only small earthquakes
It is more useful to specify the degree of activity:
example: "Holocene active" (last 10,000 years).
US Nuclear Regulatory Commission:
notion of "capable fault": a fault which has undergone surface rupture once in
35,000 years or more than once in 500,000 years.
State of California: legislation pertaining to building of structures near fault
lines has notion of "active fault": has had Holocene surface displacement.
Geologic reality is more complex than such simple definitions.
Majority of damage occurs along the major faults:
In California: San Andreas, Hayward, Newport-Inglewood are well known
and well mapped faults.
we can expect a repetition of the 1857 Fort Tejon earthquake
1906 San Francisco
1872 Owens Valley
or a large earthquake on some other large fault in California (e.g. the Hayward fault).
Finding out about the safety of a given location:
1- locate on geologic map showing local faults (active and inactive)
2- determine the history of the nearest faults: frequency, magnitude, intensity
patterns, surface ruptures of past earthquakes.
Some surface ruptures are restricted to narrow zone immediately adjacent to
fault line. Some faults, on the other hand, tend to fracture ground surface
over a broad area: several hundreds of meters.
What is the likelihood of "new" faulting?
Is it possible to have a new fault through previously unbroken rock?
Looking at the historical record of earthquakes: with few exceptions, every major
historic earthquake associated with surface rupture has occurred on a fault
with a demonstrable history of prior fault displacement (Quaternary).
How is a new fault ever born?
Faults originate as small fractures that gradually lengthen during successive
earthquake or strain episodes.
Incremental lengthening has been observed: in the 1968 Borrego Mtn earthquake in
S. Calif., the rupture on the Coyote Creek fault extended several hundred meters
farther north through previously unbroken Tertiary strata.
Hayward Fault
Don't buy a house on top of it.
Most engineers and seismologists believe that the H.F. could produce the
most destructive earthquake in the Bay Area.
o runs through the most densely populated and oldest cities in the East Bay
o highest probabilities, based on history, of causing the next M 7+
earthquake in the next 30 years:
28% northern Hayward fault
23% southern Hayward fault
22% Rodgers Creek fault
when combined: odds are greater than 1 in 2.
+ risk on the SAF: 23%
total in the Bay Area: odds of about 2 in 3 that at least one M 7 earthquake
strikes in the next 30 years.
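If the segment probabilities above are treated as statistically independent (a simplifying assumption), the combined odds quoted in this section can be reproduced with a short sketch:

```python
# Combined 30-year probability of a M 7+ earthquake, treating the
# fault segments as statistically independent (a simplifying assumption).
p_segments = {
    "northern Hayward": 0.28,
    "southern Hayward": 0.23,
    "Rodgers Creek":    0.22,
}
p_saf = 0.23  # San Andreas Fault

def combined(probs):
    """P(at least one event) = 1 - P(no event on any segment)."""
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

p_hayward_system = combined(p_segments.values())            # ~0.57: "greater than 1 in 2"
p_bay_area = combined(list(p_segments.values()) + [p_saf])  # ~0.67: "2 in 3"
print(round(p_hayward_system, 2), round(p_bay_area, 2))
```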
Past History:
1836 M6.8-7 one of the largest events in the Bay Area
fissures from San Pablo to Mission San Jose (Fremont)
Latest studies indicate that this earthquake may not have actually happened,
and that it was instead a mistaken record of the 1838 San Andreas fault
event.
Oct 21, 1868 M6.8-7 20 miles: Fremont to Mills College (Oakland)
Horizontal displacement 3 feet (actually up to 2 m)
every building in village of Hayward damaged or
destroyed
numerous structures in SF damaged (particularly in filled
areas)
Northern Hayward Fault has not ruptured for at least 160 yrs and perhaps as
much as 220 yrs
Smaller earthquakes (M ~5):
Oct 7, 1915 centered near Piedmont, felt to Santa Clara
May 16, 1933 Fremont (all chimneys thrown down)
March 8, 1937 Most recent damaging earthquake (Berkeley, Albany, El Cerrito)
At least 6 Hayward Fault earthquakes in the last 2100 years as documented by
trenching on the southern Hayward Fault.
The creep problem:
Long-term fault slip rates ~10 mm/yr, half of which is active creep. At least 1 m
of slip potential has currently accumulated (since the last earthquake in 1868).
This could mean a M>6.5 in the near future.
The Hayward fault has been assigned the highest probability for a destructive
earthquake in the Bay Area in the next 30 years, with an estimated recurrence
interval of 167 years (Working Group for California Earthquake Prediction,
1990). Estimated cost: $10 billion, and several thousand deaths.
The uncertainty in earthquake potential on the Hayward Fault is however
high, due to creep, which has been going on steadily at 4-9 mm/yr since 1868.
There is a 5km long stretch on the southern Hayward Fault with 9mm/yr
creep.
After the 1989 Loma Prieta earthquake, creep slowed down on the southern
Hayward Fault, which indicates that creep is sensitive to the regional static
stress field and varies in response to local events. Lack of knowledge about
the extent of locked and creeping portions results in great uncertainty in the
magnitude and time of occurrence of a possible large earthquake.
How far is far enough?
Not simple. Distance is not always the most important factor.
Geologic foundation also plays an important role: certain soil foundations
intensify the shaking, also certain soils are prone to settlement.
e.g. San Francisco 1906 intensity map.
hazards of geology:
1906: people on top of hills in the city were not even awakened by the earthquake.
Numerous unreinforced masonry buildings survived the earthquake without
severe damage. Landfill regions, on the other hand: people thrown out of bed,
unable to get on their feet for 40-60 sec. Many buildings collapsed.
1989: same effects (although epicenter 60 miles from downtown SF, and only
15 sec of strong motion):
Pacific Heights: no damage
Marina: Old landfill collapses, severe damage: 5 times larger ground motion
Cypress Freeway collapse: soft soils.
Intensity of vibrations increases as waves enter a thick layer of soft soil.
cf jelly in a bowl: amplification of motions. Waves transformed from rapid
small amplitude vibrations in bedrock to slower more damaging large
amplitude waves.
greatest hazards are on alluvial soils: man made landfills, valleys, near coast,
bay.
Hills and mountains: bedrock plus thin soil surface: safer even if hillside is
steep, except in landslide area
Loma Prieta: max acceleration 0.64 g at Corralitos near Santa Cruz
Pacific Heights: 0.06 g
Telegraph Hill: 0.08 g
Marina: 0.25-0.35 g
SF Int. Airport: 0.33 g (filled land)
San Bruno, firm ground: 0.14 g
Mexico City 1985 (M 8.1, 200 miles away), old lake bed: central city 0.25 g;
rock sites: 0.05 g.
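The recorded peak accelerations above show the site effect directly. A quick sketch comparing each San Francisco site to the firm-rock Pacific Heights value (the Marina is taken at the midpoint of its 0.25-0.35 g range, an arbitrary choice for this sketch):

```python
# Peak ground accelerations (g) from the 1989 Loma Prieta list above.
# The Marina value is taken at the midpoint of its 0.25-0.35 g range.
pga = {
    "Pacific Heights (rock)": 0.06,
    "Telegraph Hill (rock)":  0.08,
    "Marina (old landfill)":  0.30,
    "SF Int. Airport (fill)": 0.33,
}
reference = pga["Pacific Heights (rock)"]
for site, accel in pga.items():
    # Ratio to the firm-rock site; the Marina comes out ~5x, consistent
    # with the "5 times larger ground motion" noted earlier.
    print(f"{site}: {accel:.2f} g ({accel / reference:.1f}x Pacific Heights)")
```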
Kobe 1995 earthquake (Mw 6.9)
Occurred in the port city of Kobe. 5,100 dead, 30,000 injured and 300,000 homeless
in a city of 1.4 million.
Right lateral slip on Nojima fault. Maximum offset 1.5m, length of faulting
50 km, northern part of rupture beneath the city
Surprise: the Nojima fault had not been explicitly marked as an imminent
threat (only 2 historical earthquakes: 868, 1916).
Japan's worst earthquake disaster since the great 1923 Kanto earthquake,
because the strongest shaking was squeezed into a 3-km-wide plain between
the mountains and Osaka Bay, with many structures on soft soils and harbor fill.
Peak horizontal acceleration of 0.8g.
Heavy clay tile roofs on vertical posts and horizontal beams, little X bracing or
plywood sheathing to resist horizontal shaking.
Newly built ductile-frame high-rise buildings were not damaged in general, but
many buildings built in the 1950s-60s, predating critical building code
improvements in 1971 and 1981, were destroyed or damaged.
Soil liquefaction
Very common effect in low-lying coastal areas or wherever soft soils and
shallow water tables coexist.
Earthquake vibrations cause compaction of soil --> water flows upward --->
sandy/muddy soils become like quicksand
e.g. Niigata, Japan 1964 (M7.3): sea-level city 35 miles from the epicenter.
Marina district (San Francisco): area filled for the 1915 Panama-Pacific
Exposition, then built up in the 1920s-30s before modern soil compaction
techniques were developed.
Now: Daly City after 1989: 35 feet deep compacted fill. Remains to be seen
what happens in larger event.
Products of hazard assessment
There are many different such products
map of highly active fault
map of historical seismicity
plus other parameters -----> seismic zoning maps
delineate different degrees of hazard
generally in probabilistic terms: likelihood of
earthquake occurrences, or probability of exceeding given levels of ground
shaking.
the most thorough evaluations have been done in connection with specific
critical structures: major dams, nuclear power plants.
~$10 M spent on a single facility; also generates much controversy.
Deterministic/probabilistic hazard assessment
Deterministic: specifies a particular earthquake or level of ground
shaking that is to be considered by the user:
magnitude, location or peak ground acceleration
does not specify how likely the event might be, only that it is "credible"
Worst-case scenarios: e.g. M 7.4 on a NS strike-slip fault within 15 km of
the considered site, causing 0.4 g peak acceleration
probabilistic: numerical probability assigned to earthquake occurrences and
their effects during a specified time period. Probability charts of different
magnitude earthquakes. Recognizes uncertainties in our knowledge of
individual input parameters such as fault slip rates and earthquake magnitude,
as well as variability in nature (not all earthquakes of the same magnitude
cause the same intensity of shaking at the same distance).
continuum of outputs
eg. 50% probability of exceeding 0.3g during next 30 years
5% probability of exceeding 0.6g during next 30 years
could include contributions of a number of nearby faults.
Leaves more choice to user on the appropriate level of hazard or risk to be
considered:
a much more ³unlikely² earthquake will be considered for a very critical
structure, such as a nuclear power plant, than for a parking garage.
Concept of acceptable risk:
How unlikely does an earthquake have to be before it is no longer worthy of
consideration for a particular project?
Nuclear power plant: neglect the earthquake that occurs once in 10,000 years
garage: live with the 100-year earthquake because the consequences of failure
are so much less.
To design critical structures for the maximum earthquake that is physically
possible is not a realistic goal.
How safe is safe enough?
Social question which requires acceptance of responsibility and wide societal
participation.
Deterministic method: does not require data on time-dependent processes,
such as rate of earthquake occurrence or fault slip rate; does not require
facing the decision of the appropriate level of acceptable risk.
Probabilistic method: details of analysis are more systematic. Uncertainties are
specifically identified and quantified.
Deterministic assessments
Maximum credible earthquake
Maximum earthquake capable of occurring in a given area or given fault
during current tectonic regime.
Estimation of maximum earthquake
These are based on information obtained from:
1- Historical and instrumental record (earthquake catalogs)
2- Physical parameters of individual seismogenic faults (length and
segmentation)
3 - Paleoseismic evidence: maximum magnitude of pre-historical events from
the measurement of displacement-per-event (trenching)
Measurement of length:
Since seismic moment is proportional to length (width of fault, for largest
earthquakes on vertical faults can be considered to be fixed to the thickness of
the brittle zone ~15-20 km).
Relate length to moment -> relate moment to magnitude Mw
Great earthquakes have great lengths:
- Chile 1960 Mw=9.5 L~1000 km
- the entire earth's circumference (40,000 km) would generate an earthquake of
Mw 10.6: this means there is a practical limit to the maximum size of an
earthquake anywhere on the earth.
In practice:
estimate the length of the fault, compare with previously observed ruptures to
get the proportionality factor with seismic moment, then estimate the
moment.
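The length-to-magnitude scaling can be sketched numerically. As a simplifying assumption (the same one behind the 40,000 km thought experiment above), take seismic moment simply proportional to rupture length, with width and mean slip fixed, and calibrate on Chile 1960:

```python
import math

# Moment magnitude as a function of rupture length L, assuming seismic
# moment M0 is simply proportional to L (fault width and mean slip held
# fixed -- the assumption behind the 40,000 km thought experiment).
# Calibrated on Chile 1960: L ~ 1000 km, Mw 9.5.
L_REF_KM, MW_REF = 1000.0, 9.5

def mw_from_length(length_km):
    # Mw = (2/3) log10(M0) + const, and M0 is proportional to L here,
    # so a factor of 10 in length adds 2/3 of a magnitude unit.
    return MW_REF + (2.0 / 3.0) * math.log10(length_km / L_REF_KM)

print(round(mw_from_length(40000.0), 1))  # earth's circumference -> ~10.6
```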
While it is relatively simple to measure the surface rupture length of past
earthquakes, it is much more difficult to estimate ahead of time the length
associated with the maximum earthquake possible on a given fault. In
particular, one is faced with the question:
How many segments can break during a single earthquake?
The length cannot much exceed the length of the preexisting fault, but
Landers 1992 has shown that "en echelon" fault segments can break during
the same earthquake.
Hence, when assessing hazards, one often assigns probabilities to different
scenarios:
for example: 10% probability that 1 segment breaks
50% 2 segments break simultaneously
30% 3
10% more than 3
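Such scenario weights can be combined into a probability-weighted expectation; here the ">3 segments" scenario is arbitrarily taken as 4 segments for illustration:

```python
# Probability-weighted number of segments breaking in a single event,
# using the scenario weights above (">3 segments" arbitrarily taken
# as 4 for this sketch).
scenarios = {1: 0.10, 2: 0.50, 3: 0.30, 4: 0.10}
assert abs(sum(scenarios.values()) - 1.0) < 1e-9  # weights must sum to 1
expected_segments = sum(n * p for n, p in scenarios.items())
print(round(expected_segments, 2))  # 2.4
```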
Probabilistic assessment
Based on:
1) seismicity
2) geologic data (fault slip rates)
3) geodetic data (presently GPS): rates of deformation
Seismicity
Gutenberg and Richter (1930s) developed an empirical law, valid worldwide and
now bearing their name:
Log N = a - b M   ("frequency-magnitude" relation)
N= number of earthquakes per year per unit area (per 1000 km2 for example)
M= magnitude
a and b are constants in a given area, giving:
a = level of seismicity
b = ratio of small to large earthquakes
It is notable that everywhere b is very close to 1. b is referred to as the
"b-value".
The Gutenberg-Richter (GR) law allows extrapolation of frequency estimates to
magnitudes not included in the dataset. For example, in a given region, one
would measure M 3-6 earthquakes over several decades, and then extrapolate
to obtain the frequency of M 8 events to answer the question: how often do M
8 earthquakes occur?
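A minimal sketch of this extrapolation, with hypothetical values of a and b (real values would be fit to a regional catalog of M 3-6 events):

```python
# Gutenberg-Richter: log10 N = a - b*M, with N the annual rate of
# earthquakes of magnitude >= M in a region.  The constants below are
# hypothetical; in practice a and b are fit to a catalog of M 3-6
# events recorded over several decades.
a, b = 4.0, 1.0

def annual_rate(m):
    """Expected number of events per year with magnitude >= m."""
    return 10.0 ** (a - b * m)

rate_m8 = annual_rate(8.0)
print(f"M>=8: {rate_m8:g}/yr, i.e. about one every {1.0 / rate_m8:.0f} years")
```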
There are pitfalls:
1 - In practice there is a high-end cut-off in magnitude, and we don't know a
priori where it is (we know M>9.5 earthquakes don't occur, but that is not
sufficient)
2 - The extrapolation assumes that the data are significant over long time
periods, it ignores the existence of fluctuations in seismicity.
GR is useful if the data sample is LONG, and the area of consideration is
LARGE, but has to be taken with caution for small areas and short records.
Example of probabilistic hazard assessment based on GR law:
Step 1: identify the faults around a given location
Step 2: assign to each fault a recurrence curve based on seismicity (GR)
Step 3: construct attenuation curves to estimate peak accelerations as a
function of magnitude and distance
Step 4: combine 2 and 3 and obtain a curve giving the probability of exceeding
a given peak acceleration during a specific time period.
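Step 4 can be sketched under the common simplifying assumption that exceedances arrive as a Poisson (memoryless) process in time; the annual rate used here is purely illustrative, standing in for the combination of steps 2 and 3:

```python
import math

# Probability of exceeding a given peak acceleration in a time window,
# assuming exceedances arrive as a Poisson (memoryless) process.  The
# annual rate would come from combining the recurrence curves (step 2)
# with the attenuation curves (step 3); the value below is illustrative.
def prob_exceedance(annual_rate, years):
    """P(at least one exceedance in `years`) = 1 - exp(-rate * years)."""
    return 1.0 - math.exp(-annual_rate * years)

lambda_03g = 0.023  # hypothetical: 0.3 g exceeded about once per 43 years
print(round(prob_exceedance(lambda_03g, 30.0), 2))  # ~0.5 over 30 years
```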
Difference with deterministic analysis:
- smaller events may have influence on hazard assessment (not only the
maximum earthquakes)
- no account is taken of when the last major earthquake occurred
Concept of "characteristic earthquake"
For individual faults, GR approach does not work very well, and some other
procedure is needed.
From experience, we know that individual faults tend to produce repeated
maximum earthquakes of a specific size, whereas smaller earthquakes do not
occur as systematically as would be suggested by a simple GR recurrence
curve.
For example, at Parkfield (CA):
- 5 earthquakes of M6 occurred in the same location in the last 150 years
- according to the GR law, there should have been 50 events of magnitude 5 in
the same time period: this was not observed (far fewer M5 events in reality).
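The "50 events of magnitude 5" figure is just the GR law with b = 1: dropping one magnitude unit multiplies the expected count by 10.

```python
# Parkfield check: with b = 1, the GR law predicts 10x more M 5 than
# M 6 events over the same time period.
b = 1.0
n_m6 = 5  # five M 6 events observed in ~150 years
n_m5_predicted = n_m6 * 10.0 ** (b * (6 - 5))
print(int(n_m5_predicted))  # 50 -- far more than were actually observed
```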
Therefore we cannot always use the smaller events to predict the recurrence
rates of larger events ---> we need to use geologic data to find the mean
recurrence intervals between large earthquakes on a specific fault. From that
infer the probability that such an event might occur in the near future.
However, rarely can we identify and date enough earthquakes over enough
time to provide a valid long term recurrence interval.
Primary input:
- long term slip rate (V)
Has to be measured over a long enough time to encompass several
characteristic earthquakes.
- infer recurrence interval (Tr):
Tr = D/V where D is the mean displacement per event
Several important assumptions:
1 - Characteristic earthquakes exist
2- D can be inferred from the magnitude of the characteristic earthquake (this
assumes that there is a single-valued relation between the displacement in a
characteristic earthquake and magnitude)
3- All surface slip is represented by characteristic earthquakes (smaller ones
don't contribute significantly)
4- Earthquakes recur periodically when some given strain levels are reached
---> "time-dependent model"
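A minimal numerical sketch of Tr = D/V. The inputs are assumptions chosen for illustration; a mean displacement per event of 1.5 m and a long-term slip rate of 9 mm/yr happen to give Tr close to the ~167-year Hayward Fault figure quoted earlier:

```python
# Recurrence interval Tr = D / V.  Both inputs are illustrative: a mean
# displacement per characteristic event of 1.5 m and a long-term slip
# rate of 9 mm/yr give Tr close to the ~167-year figure quoted above
# for the Hayward Fault.
D = 1.5    # mean displacement per event, meters
V = 0.009  # long-term slip rate, meters per year
Tr = D / V
print(round(Tr))  # ~167 years
```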
Example:
Determination of the 30 year probability of a M>7 earthquake in the San
Francisco Bay Area
Input:
1 - Choice of fault segments (6 chosen, including the Rodgers Creek Fault,
Northern Hayward Fault, Southern Hayward Fault, Peninsula strand of the
SAF, SAF north of the Bay, and SAF south of the Bay, where the Loma Prieta
earthquake hit).
2 - Estimate slip rate on each segment
3 - Estimate expected magnitude of characteristic earthquake on each segment
4 - estimate time of last characteristic earthquake on each segment
5- add uncertainties on 1,2,3.
From that we can obtain a "probability density function" which gives the
probability of recurrence in a time interval ΔT, given an elapsed time Te since
the last characteristic earthquake.
If the mean recurrence time was known exactly, we would get a spike: 100%
probability at that time, 0% at other times
If the system has no memory and is completely random, we would get a
horizontal line: each time is equally probable.
For the Bay Area, the current estimates are 67% probability in the next 30
years if all segments are taken together, and between 23-30% on individual
segments (except the SAF north and south of the peninsula).
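The probability density function idea can be sketched with a lognormal recurrence-time distribution, one common choice for time-dependent ("renewal") estimates. The median recurrence Tr, scatter sigma, and elapsed time below are illustrative values, not the Working Group's actual inputs:

```python
import math

# Conditional probability that the next characteristic earthquake occurs
# within dt years, given te years elapsed since the last one, using a
# lognormal recurrence-time distribution (a common "renewal" choice).
# Tr (median recurrence) and sigma (log-scale scatter) are illustrative.
def lognormal_cdf(t, tr, sigma):
    """CDF of a lognormal distribution with median tr."""
    return 0.5 * (1.0 + math.erf(math.log(t / tr) / (sigma * math.sqrt(2.0))))

def conditional_prob(te, dt, tr, sigma):
    """P(te < T <= te + dt | T > te) for recurrence time T."""
    f_now = lognormal_cdf(te, tr, sigma)
    f_later = lognormal_cdf(te + dt, tr, sigma)
    return (f_later - f_now) / (1.0 - f_now)

# Small sigma concentrates the answer near Tr (the "spike" case); very
# large sigma approaches the memoryless, equally-probable-anytime case.
p = conditional_prob(te=130.0, dt=30.0, tr=167.0, sigma=0.5)
print(round(p, 2))
```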
Problems with this approach:
- Recent paleoseismic studies have shown that in many cases earthquakes
have not occurred periodically, or even quasi-periodically, but have occurred
in clusters.
- Some theoretical models predict that earthquakes occur irregularly, in a
completely non-periodic way
- Blind thrusts:
These features are recognized as significant sources of hazard in many places
in the world: Taiwan, Iran, India, Algeria and California.
In California:
M 6.3 Coalinga 1983
M 5.8 Whittier Narrows 1987
M 6.7 Northridge 1994
- It is not possible to directly examine buried faults that do not have
surface expression
- blind thrusts have low dip angles: seismicity is not very helpful in
identifying them, because of large uncertainties in depths of small
earthquakes.
To study the hazards of blind thrusts, we need to use:
1) geologic information (Balanced cross-sections based on surface folding and
rock densities)
2) Geodesy: GPS gives accurate convergence rates; for example, a recently
determined convergence rate in the LA Basin is 8 mm/yr, whereas no large
earthquake has happened there in the last 150 years.
3) geomorphic evidence: deformation of late Quaternary stream terraces or
coastal terraces
e.g. 1987: the Whittier Narrows earthquake occurred on a blind thrust identified
in 1927 as an active anticline based on warped geomorphic surfaces
On the other hand, the absence of deformation for 80,000 yrs in marine
terraces around the planned site for the Diablo Canyon Nuclear Power Plant
led to the conclusion that blind thrusts were not the dominant hazards in
that location.
In most cases where there is not much information, scientists use the
"floating earthquake hypothesis" which assumes equal likelihood for a
reasonable (conservative) size earthquake with appropriate mechanism
(thrust) to occur anywhere within the suspected area.
A case study: The Cascadia Subduction Zone
No instrumentally recorded or historical large subduction zone earthquake
But has been recognized as a potential site for a major earthquake, perhaps as
large as the Alaska 1964 one.
Evidence:
-recent motion: 4cm/yr convergence measured geodetically
Cascade Range compressed, and Washington coast uplifted
-subduction is active: active volcanoes (e.g. Mt. St. Helens erupted in 1980)
-specific evidence for recent motion:
cycles of uplift and subsidence up to 2m at least 6 times in the last 7000 years,
and most recently 300 years ago---> could be due to earthquakes
(this from trenching and uncovering successive layers of peat (uplift: living
organisms die) and mud (subsidence: sand deposits from sedimentation and
perhaps tsunamis))**
-evidence from "paleotsunamis" for several large earthquakes in the past 2000
years.
consequence: the entire slab could slip in a megathrust earthquake (M 9) or
several smaller earthquakes (M 7) that would rupture limited segments of the
subduction zone at a time.
** a recent example of such direct observation is the aftermath of the April 25,
1992 thrust earthquake near Petrolia, CA (M 6.9) and 2 damaging aftershocks
(up to 1.8g horizontal acceleration recorded):
In the tidal zone several days after the earthquake, a terrible stench developed
along the seashore: over more than 100 km, the shore had been uplifted by about
1 m relative to high tide --> marine life had died and started to decompose.