|Home||Archived February 20, 2019|
The 2010 Chilean Tsunami and Uncertainty in Tsunami Modeling
A magnitude 8.8 earthquake—the fifth-largest instrumentally recorded earthquake in history—struck off the central coast of Chile at 3:34 a.m. local time on February 27, 2010, causing deaths and widespread damage (see Magnitude 8.8 - Offshore Maule, Chile). In addition to the deadly shaking, the earthquake triggered a tsunami that devastated several villages on the Chilean coast. Together, the earthquake and tsunami killed nearly 500 people in Chile. The tsunami radiated to shores throughout the Pacific Ocean basin, where it was widely recorded by tide gauges.
This earthquake and tsunami are not without precedents: the world's largest instrumentally recorded earthquake (magnitude 9.5) occurred just to the south in 1960, triggering a massive tsunami and leading to the loss of approximately 2,000 lives in southern Chile. The tsunami spread across the Pacific Ocean, killing 61 people in Hawai‘i, 138 in Japan, and 32 in the Philippines. The 1960 tsunami had an 11-m maximum runup in Hawai‘i—that is, it reached an elevation on land of 11 m (36 ft) above sea level. In addition, a smaller (M~8.0-8.5) earthquake in 1837, also just south of the 2010 epicenter, resulted in a destructive tsunami with a reported 6-m (20 ft) maximum runup in Hawai‘i.
Soon after the 2010 Chilean earthquake, the Pacific Tsunami Warning Center (part of the National Weather Service of the National Oceanic and Atmospheric Administration [NWS/NOAA]) issued a tsunami warning for the State of Hawai‘i and most of the countries surrounding the Pacific. Fortunately, the 2010 tsunami did not prove to be nearly as destructive on distant shores as past tsunamis. At tide gauges in Hawai‘i, for example, the tsunami's maximum amplitude (height from sea level to crest of wave) was less than 1 m (3 ft); runup figures have not been reported but would likely be comparable to the tide-gauge amplitudes.
For the U.S. west coast and Alaska, the West Coast/Alaska Tsunami Warning Center (also part of NWS/NOAA) issued a tsunami advisory to inform emergency managers and the public that "a tsunami capable of producing strong currents or waves dangerous to persons in or very near the water is imminent or expected." The advisory was quite accurate: strong currents were reported at many harbors, with several instances of broken mooring lines and minor damage reported at harbors in southern California. The West Coast/Alaska Tsunami Warning Center's Web site (http://wcatwc.arh.noaa.gov/chile/chileamp.php) compares forecasted (that is, estimated) tsunami amplitudes at specific tide-gauge stations with observed amplitudes. In Washington, Oregon, and northern California, the forecasted amplitudes were very close to the observed amplitudes. Elsewhere, the forecasted values were greater than the observed values, although in some places, such as Santa Barbara, the observed values were greater. It bears emphasizing that even low-amplitude tsunamis can generate strong (and therefore dangerous) currents in harbors over many hours.
Tsunami Models and Uncertainty
The massive amount of tide-gauge data recording the 2010 Chilean tsunami around the Pacific Ocean makes it possible to better understand the sources and types of uncertainty associated with computational models of how tsunamis are generated, how they travel through the open ocean, and what happens when they hit coastlines. (For an introduction to these processes, see "Life of a Tsunami." ) Such models, along with near-real-time tsunami measurements from deep-ocean buoys, are increasingly used to forecast tsunamis soon after an earthquake. (See related Sound Waves articles, "Workshop on Optimizing the DART Network for Tsunami Forecasting" and "Tsunami-Forecasting System Tested by Recent Subduction-Zone Earthquakes.") To improve these models, and thus the accuracy of tsunami forecasts, it is critical that uncertainty analyses be conducted when new data become available.
Some aspects of the waves that make up a tsunami—sometimes referred to as the tsunami wavefield—are easier to model than others. For example, the time it takes for the first tsunami wave to travel from the earthquake source region to any coastal site (termed the "first arrival time") can be modeled with great accuracy because the speed at which a tsunami wave travels depends only on the water depth along its propagation path. The first tsunami wave, however, may not be the most dangerous; the wave with the greatest amplitude can occur hours after the first arrival. Because more factors are involved, tsunami amplitude is more difficult to model than first arrival time. In general, the amplitude of the tsunami scales with the magnitude of the earthquake—the higher the magnitude, the greater the amplitude—but estimating the open-ocean tsunami amplitude with any precision depends on several assumptions, each with varying levels of uncertainty. Much of the total uncertainty in estimating tsunami severity is related to the tsunami-generation process, particularly the location of the earthquake rupture, how much the seafloor is uplifted and downdropped, and how deep the overlying water column is.
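The depth dependence described above follows from the standard shallow-water approximation, in which a tsunami travels at the speed c = sqrt(g × depth). A minimal sketch of a travel-time calculation along a path discretized into segments of roughly constant depth might look like the following (the depths and distances below are illustrative, not actual Pacific bathymetry):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed(depth_m: float) -> float:
    """Shallow-water wave speed c = sqrt(g * h); valid because tsunami
    wavelengths far exceed the ocean depth."""
    return math.sqrt(G * depth_m)

def travel_time_hours(segment_depths_m, segment_lengths_km):
    """Sum travel time over a propagation path discretized into
    segments of (roughly) constant depth."""
    seconds = sum(
        (length_km * 1000.0) / tsunami_speed(depth)
        for depth, length_km in zip(segment_depths_m, segment_lengths_km)
    )
    return seconds / 3600.0

# Illustrative trans-Pacific path: mostly 4,000-m-deep open ocean
# with a shallower final approach to the coast.
depths = [4000, 4000, 1000, 100]   # m
lengths = [5000, 4500, 400, 100]   # km
print(f"open-ocean speed: {tsunami_speed(4000):.0f} m/s")  # ~198 m/s (~713 km/h)
print(f"travel time: {travel_time_hours(depths, lengths):.1f} h")
```

Because every term in the calculation is known from bathymetric charts, the first arrival time can be forecast with high confidence, which is exactly the point made above.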
Earthquakes with the same magnitude can produce tsunamis of different sizes, depending on the location of the rupture. Shown in the figure below are three possible rupture locations along the interplate thrust fault of a subduction zone, each with the same amount of slip (and therefore the same earthquake magnitude). This fault marks the boundary between tectonic plates, separating the downgoing plate (for example, the Nazca plate at the Chile subduction zone) from the overriding plate (for example, the South American plate). The top panel shows an earthquake rupture beneath a continental shelf. The rocks surrounding the rupture zone quickly deform, resulting in the vertical displacement graphed in the top half of the panel. In this case, most of the vertical displacement occurs offshore and is transferred to the tsunami. In the middle panel, the rupture occurs slightly deeper in the subduction zone, much of it beneath land rather than water. Only a small part of the vertical displacement is transferred to the ocean, and the resulting tsunami is small relative to the magnitude of the earthquake. In the bottom panel, the rupture occurs closer to the oceanic trench and at a shallow depth below the seafloor. Virtually all the vertical displacement caused by the earthquake is transferred to the water above, and because the water is deep at such a site, a relatively large mass of water is displaced. As the resulting tsunami travels into shallow water—at either a nearby or a distant shore—it becomes amplified to a much greater extent than in the other two cases.
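The amplification of a tsunami as it moves from deep water into shallow water, noted for the bottom panel above, is often approximated by Green's law, in which amplitude grows as the fourth root of the depth ratio. The sketch below uses that idealized relation (it ignores reflection, friction, and wave breaking) to contrast a deep-water source with a shallow-shelf source; the amplitudes and depths are made-up values for illustration:

```python
def greens_law_amplitude(a_source_m: float, h_source_m: float,
                         h_coast_m: float) -> float:
    """Green's law: a_coast = a_source * (h_source / h_coast) ** 0.25.
    An idealization that conserves wave energy flux while shoaling;
    it neglects reflection, bottom friction, and breaking."""
    return a_source_m * (h_source_m / h_coast_m) ** 0.25

# Same 0.5-m initial amplitude, generated over deep water vs. over a
# shallow shelf (loosely mirroring the bottom vs. top panels):
print(greens_law_amplitude(0.5, 4000, 10))  # deep source -> ~2.24 m at coast
print(greens_law_amplitude(0.5, 200, 10))   # shelf source -> ~1.06 m at coast
```

The fourth-root scaling is weak, but over a depth contrast of several thousand meters it still roughly quadruples the wave, which is why a displacement transferred to deep water matters so much.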
The February 27, 2010, Chilean earthquake had aspects of both a continental-shelf tsunami (top panel of illustration) and a coastal tsunami (middle panel) and was therefore of moderate amplitude relative to a magnitude 8.8 earthquake, but scientists could not know this until enough of the relevant data had accumulated. In the minutes following an earthquake, information such as magnitude and epicenter is readily available; however, it is difficult to ascertain the detailed slip pattern along the interplate thrust fault, particularly for very large earthquakes. After sufficient data are recorded at seismic stations around the world, these details gradually emerge. The updated 2010 Chile Finite Fault Model computed by seismologists at the National Earthquake Information Center (NEIC; see map below) shows that the rupture zone extended nearly 500 km (300 mi) along the coast, with large vertical displacements occurring offshore in some areas and just onshore in others. In general, most of the slip (and accompanying vertical displacement) occurred offshore and so was transferred to the tsunami, but primarily in shallow water. Both seismological and water-level data from the tsunami suggest that this was a "typical" magnitude 8.8 interplate thrust earthquake, in terms of where slip occurred and the size of the tsunami that was produced.
Even if tsunami amplitudes in the open ocean are accurately known from data about the tsunami-generation process, additional uncertainty arises when a tsunami arrives at the coast. Tsunami waves reflect and scatter off submerged bathymetric features as they travel toward the coast, and off headlands and other coastline features as they begin to come ashore. Interactions between the tsunami and coastal features generate secondary waves that are "trapped" along coastlines. These waves, called edge waves, propagate parallel to the coastline and themselves can be scattered by shoreline irregularities (see Sound Waves articles "Surprises from the Deadly September 29, 2009, Samoa Tsunami" and "Tsunami-Forecasting System Tested by Recent Subduction-Zone Earthquakes"). In harbors and bays, tsunamis can resonate, setting up a tsunami-induced seiche. Theoretically, all of these waves in the tsunami "coda" (the long-lasting wave activity after the first arrival) can be accurately modeled if the nearshore bathymetry is known at high enough spatial resolution. (See tide-gauge record from San Diego, second figure, this page, for an example of the waves in a tsunami coda.) High-resolution bathymetric maps for tsunami modeling have recently become available for selected sites (http://www.ngdc.noaa.gov/mgg/inundation/), but only low-resolution bathymetric maps exist for many areas vulnerable to tsunamis. Modeling the coastal response of tsunami waves by using low-resolution nearshore bathymetry introduces some uncertainty into estimates of wave height and current velocities. Turbulence in the nearshore regime, either from wave breaking or from seafloor roughness, introduces additional uncertainty.
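The harbor resonance mentioned above can be illustrated with the classic quarter-wave formula for an open-mouthed basin, T = 4L / sqrt(g × h), which treats the harbor as a uniform-depth rectangular channel. The harbor dimensions below are hypothetical, chosen only to show that basins a few kilometers long resonate at periods comparable to typical tsunami periods (tens of minutes):

```python
import math

def seiche_period_open_basin(length_m: float, depth_m: float) -> float:
    """Fundamental resonance period of an open-mouthed basin, treated as
    a quarter-wave resonator: T = 4L / sqrt(g * h). An idealization
    assuming a uniform-depth rectangular harbor."""
    return 4.0 * length_m / math.sqrt(9.81 * depth_m)

# Hypothetical harbor: 3 km long, 10 m deep.
t_seconds = seiche_period_open_basin(3000, 10)
print(f"fundamental seiche period: {t_seconds / 60:.0f} min")  # ~20 min
```

When the incoming tsunami's period is close to this natural period, wave energy is trapped and amplified in the basin, sustaining the strong currents described above long after the first arrival.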
In assessing and forecasting natural hazards, different sources and types of uncertainty are commonly classified as being either epistemic or aleatory. Epistemic uncertainty, or "knowledge uncertainty," is related to a lack of (or inaccurate) data on which the models are based. The acquisition of additional and more accurate data reduces epistemic uncertainty. Aleatory uncertainty, or "natural uncertainty," is related to the physical process itself and typically is not reduced by the collection of additional data.
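The distinction between the two types of uncertainty can be made concrete with a toy statistical example (the numbers below are invented, not tsunami data): repeated observations shrink the error in an estimated mean, which is epistemic, but the natural scatter of individual events, which is aleatory, remains no matter how many observations are collected.

```python
import random
import statistics

random.seed(0)

# Toy model: each "event" = true mean + irreducible natural scatter.
TRUE_MEAN = 2.0   # hypothetical quantity being estimated
NATURAL_SD = 0.5  # aleatory scatter of individual events

def estimate(n_obs: int):
    """Return (estimated mean, estimated spread) from n_obs samples.
    The mean's error shrinks like sd / sqrt(n) (epistemic); the spread
    converges to NATURAL_SD and stays there (aleatory)."""
    obs = [random.gauss(TRUE_MEAN, NATURAL_SD) for _ in range(n_obs)]
    return statistics.mean(obs), statistics.stdev(obs)

for n in (5, 50, 5000):
    m, s = estimate(n)
    print(f"n={n:5d}  mean~{m:.2f} (epistemic error shrinks)  "
          f"spread~{s:.2f} (aleatory, does not)")
```

This is the pattern the paragraph above describes: more and better data narrow what we do not know, but not the variability inherent in the process itself.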
Assessments of natural hazards are conducted before disasters strike to help managers plan for them. Tsunami-hazard assessments, for example, identify areas vulnerable to tsunami inundation, and managers use this information to plan evacuation routes and conduct public education. Because such assessments are made before a tsunamigenic earthquake occurs, they encompass a great deal more uncertainty than the real-time tsunami forecasts made immediately after an earthquake. For example, the time when a specific earthquake might occur cannot be predicted, and so the tidal stage during which a tsunami arrives at the coast also cannot be predicted. This uncertainty is treated as aleatory uncertainty. Once an earthquake occurs, this uncertainty is greatly reduced, and travel times and tsunami-coda duration can be predicted with relatively high accuracy. As another example, before an earthquake occurs, the detailed slip pattern of a future rupture cannot be predicted and is also treated as aleatory uncertainty. After an earthquake occurs and sufficient seismic waveform data have become available, the slip pattern can be estimated; in this case, the uncertainty switches from being aleatory to epistemic—it depends on the amount and accuracy of the data used to estimate the slip pattern. In the minutes, hours, and days after the earthquake, as more and different types of data are obtained (including Global Positioning System [GPS], near-field strong-motion, and tsunami-waveform data), epistemic uncertainty is reduced in subsequent analyses, though never completely eliminated.
"Probabilistic" techniques that incorporate both types of uncertainty are increasingly being used in hazard assessments. (See, for example, a pilot study focused on Seaside, Oregon: http://pubs.usgs.gov/of/2006/1234/ and http://pubs.usgs.gov/ds/2006/236/). In the past, tsunami-hazard-assessment models have been primarily "deterministic": they assume an earthquake with a specific set of parameters that lead to a single scenario for the resulting tsunami. Probabilistic models test numerous possible sets of earthquake parameters, generate numerous possible tsunami scenarios, and report their probabilities. Probabilistic techniques have been used for a long time in weather forecasting and are recently being expanded to forecast specific real-time hazards, such as hurricane storm surge (for example, see NOAA's National Hurricane Center Web site at http://www.nws.noaa.gov/mdl/psurge/). Analysis of the tremendous amount of data from the 2010 Chilean tsunami will allow researchers to better quantify uncertainty in tsunami models, with an eye toward possibly developing probabilistic forecasting methods for tsunamis in the future.
To view computer animations of the 2010 Chilean tsunami, visit Preliminary simulations of the 2010 Chilean tsunami from the 27 February 2010 M=8.8 subduction zone earthquake, offshore Chile. For an in-depth discussion of observations of tsunamis and their often unexpected behavior, see a recent paper by the author in Advances in Geophysics, 2009, v. 51, p. 107-169, http://dx.doi.org/10.1016/S0065-2687(09)05108-5.