The observational record of tropical cyclones in the North Atlantic basin is too short to accurately assess long-term trends and low-frequency variability in storm activity. Using these long-term synthetic tropical cyclone data sets, I investigate the relationship between power dissipation and ocean temperature metrics, as well as the relationship between basin-wide and landfalling tropical cyclone count statistics over the past millennium. Contrary to previous studies, I find only a very weak relationship between power dissipation and main development region sea surface temperature in the Atlantic basin.

Consistent with previous studies, I find that basin-wide and landfalling tropical cyclone counts are significantly correlated with one another, lending further support for the use of paleohurricane landfall records to infer long-term basin-wide tropical cyclone trends.

Additionally, I investigate the changing risk of inundation for the United States Atlantic coast, which depends both on storm surges during tropical cyclones and on the rising sea levels on which those surges occur. Focusing on New York City, I compare pre-anthropogenic era and anthropogenic era storm-surge model results, exposing links between increased rates of sea-level rise and storm flood heights.

Unlike the joint probability method (JPM), where uncertainty is individually quantified based on its source, the uncertainty in surge hazard studies for areas influenced only by extratropical cyclones (XCs) is often computed using resampling methods.

The mean curve and confidence limit curves were calculated at 23 water level gages. The uncertainty from the bootstrap includes an aleatory component related to the selected sample and an epistemic component related to the best estimate parameters of the distribution. Analysis based on simulated responses developed from high-fidelity meteorological and hydrodynamic modeling of historical data should exhibit epistemic uncertainties similar to those of TCs due to the similarity in the meteorological and hydrodynamic modeling approaches.
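As a rough illustration of the resampling idea only (not the actual procedure, distributions, or gauge data of the referenced study), the sketch below bootstraps a synthetic annual-maximum record at a single gauge and derives a mean return-level curve with percentile confidence limits, using a GEV fit as an assumed extreme-value model:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)

# Synthetic annual-maximum water levels (m) at one gauge -- placeholder data only.
annual_maxima = rng.gumbel(loc=1.2, scale=0.3, size=60)

return_periods = np.array([2, 5, 10, 25, 50, 100, 500])
aep = 1.0 / return_periods                    # annual exceedance probabilities

def return_levels(sample):
    """Fit a GEV distribution to the sample and evaluate the return levels."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.isf(aep, c, loc=loc, scale=scale)

# Bootstrap: resample the record with replacement and refit each time.
n_boot = 1000
boot = np.array([
    return_levels(rng.choice(annual_maxima, size=annual_maxima.size, replace=True))
    for _ in range(n_boot)
])

mean_curve = boot.mean(axis=0)                            # mean stage-frequency curve
lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)   # confidence limit curves

for T, m, lo, hi in zip(return_periods, mean_curve, lower, upper):
    print(f"{T:4d}-yr level: {m:5.2f} m  (95% CI {lo:.2f}-{hi:.2f} m)")
```

The spread of the bootstrap curves mixes the aleatory effect of the particular sample with the epistemic effect of re-estimating the distribution parameters, which mirrors the decomposition described above.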

For areas affected by both TCs and XCs, separate probabilistic analyses are performed and the results are combined assuming independence between the two populations. The storm recurrence rate (SRR) is a measure of the frequency with which a particular area is expected to be affected by TCs. The capture zone approach consists of counting the number of hurricanes that make landfall along an idealized segment of coastline or pass through a predefined area.

The second approach is the Gaussian Kernel Function (GKF; Chouinard and Liu), in which each storm is assigned a weight that decreases as its distance to the point of analysis increases. The distance-adjusted weights are based on a Gaussian PDF with an optimal kernel size. Since the methods for recording TCs have not been homogeneous throughout the time period for which there are formally documented data, the selection of the period of record for the analysis represents a significant source of uncertainty.

This period coincides with the start of hurricane-tracking, reconnaissance-aircraft data collection missions. The databases include the historic storm data, quality controlled through yearly post-season analyses of recent cyclones, as well as re-analyses of past hurricane seasons that are performed to correct biases and random errors in the historic data (Landsea et al.).

Prior to that period, the principal sources of data were land stations and ship reports. The amount of data depended on coincidental encounters with the storm systems, and its quality was affected by the limitations of the instrumentation.

Storm positions were only recorded once or twice a day, making it necessary to interpolate the intermediate positions. Even though wind speeds were recorded for each position, changes in the instrumentation and in its height and location introduced some inaccuracies into the data set (Jarvinen et al.). After aircraft reconnaissance was initiated, track information over the ocean increased, and pressure data were included.

Weather satellites were introduced in the 1960s (Toro) and were considered one of the biggest advances in the tracking of TCs (Jarvinen et al.). The combined use of satellites and aircraft reconnaissance provided significant improvements to the database. Differences in the frequency, instruments, and methods of data measurement and analysis throughout the history of storm tracking introduced errors and systematic biases into the database. A reanalysis effort has been instituted by the NHC to address these issues.

The reanalyses are performed according to the most recent analysis techniques, gathering all available meteorological data, and are based on the current understanding of TCs (Delgado). The process has been completed for the earlier portion of the record (Landsea et al.). The best track data in HURDAT2 include the maximum sustained surface winds (knots), the central pressure (millibars), the position (latitude and longitude), and the wind radii (nautical miles).

The precision of these parameters is 5 knots for the winds, 1 millibar for the pressure, and 0.1 degrees for the position. One millibar is equal to 100 pascals, or 1 hectopascal (hPa). Best track maximum sustained surface winds and position data are provided at 6 hr intervals (synoptic times); best track central pressure values are provided at the same interval starting later in the record, and the recording of wind radii data begins later still. Uncertainty associated with the data in HURDAT has always been recognized, and efforts have been made towards its quantification.
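As a rough sketch of reading these fields, the snippet below pulls a few values out of HURDAT2-style comma-separated lines; the example lines are invented, and the exact column layout should be checked against the official format documentation rather than inferred from this sketch:

```python
import csv
from io import StringIO

# Two illustrative HURDAT2-style lines (storm header, then one 6-hourly record);
# the values are examples only, not taken from the real database.
sample = """AL092011,              IRENE,     39,
20110827, 1200,  , HU, 35.2N,  75.7W,  75,  952,
"""

def parse_hurdat2(text):
    storms = {}
    current = None
    for row in csv.reader(StringIO(text)):
        fields = [f.strip() for f in row]
        if fields and fields[0].startswith(("AL", "EP", "CP")) and len(fields[0]) == 8:
            current = fields[0]                 # basin + storm number + year identifier
            storms[current] = []
        elif current and len(fields) >= 8:
            storms[current].append({
                "date": fields[0], "time": fields[1], "status": fields[3],
                "lat": fields[4], "lon": fields[5],
                "vmax_kt": int(fields[6]),      # maximum sustained wind, knots
                "pmin_mb": int(fields[7]),      # central pressure, millibars
            })
    return storms

print(parse_hurdat2(sample))
```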

Uncertainty estimates of the parameters have been provided as part of the reanalysis project for the different eras (Landsea et al.). Additional efforts can be attributed to the independent work of Torn and Snyder and of Landsea and Franklin. Torn and Snyder estimated the uncertainty in position and intensity for a recent multi-decade period; they first verified the satellite observations using reconnaissance data and then derived new estimates by comparing the ones provided by the NHC against the verified dataset.

In separate work, Landsea and Franklin estimated the uncertainties in each parameter (intensity, central pressure, position, and size) for the same period of time. They used data from three different sources: satellite-only, satellite and aircraft, and U.S. landfall observations.

The results are shown in Table 1. In the end, the uncertainty estimates for position and intensity were very similar between the two approaches. To the knowledge of the authors, the location uncertainty associated with HURDAT2 has not been accounted for in the computation of storm recurrence rates in joint probability analysis surge studies. It is also difficult to determine how much uncertainty can be attributed to the measurement of pressure data versus the fitting of the marginal distributions.

In the line-crossing method, the coastline is smoothed, and all hurricanes that make landfall are counted.

All line-crossing storms are given uniform weight, while all other storms are given a weight of zero since they are not considered. The number of storms is divided by the length of the line to calculate the rate. It has been noted (Resio et al.) that the capture zone must be big enough to encompass a sufficient number of storms to compute parameter statistics (sample error) but small enough to ensure the selected storms belong to the same population (population error).

Related to the selection of the capture zone is the fact that the procedure does not differentiate storms within a capture zone since all are assigned the same weight. In contrast, storms outside of the capture zone are given a weight of zero, making the resulting storm rate significantly sensitive to the sometimes subjective process of establishing zone limits.

In contrast with the capture zone methods, this method considers the historical data over a wide area by assigning weights to the hurricanes based on their distance to the study site. The distance-adjusted weights are computed using a Gaussian PDF with an optimal kernel size. The GKF method proposed by Chouinard included a cross-validation, least-squares procedure to objectively determine the optimal kernel size. In this procedure, the historic data are randomly separated into two samples with complementary probability.

The cross-validated square error is computed, and its minimum corresponds to the optimal kernel size. The method was effective for evaluating the GKF. The procedure was repeated for other geographical locations in the Gulf for verification purposes, and the optimal kernel size could be clearly identified from the graphs (Resio et al.).
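A schematic sketch of the distance-weighting and kernel-size selection ideas follows; it is a stand-in rather than the exact Chouinard least-squares formulation, and the storm distances, record length, and candidate kernel sizes are all made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical closest-approach distances (km) of historical storms to the study site,
# over an assumed record length; both are placeholders, not data from any study.
distances = np.abs(rng.normal(0.0, 250.0, size=90))
record_years = 70.0

def gaussian_weights(d_km, h_km):
    """Distance-adjusted storm weights from a Gaussian PDF with kernel size h (km)."""
    return np.exp(-0.5 * (d_km / h_km) ** 2) / (np.sqrt(2.0 * np.pi) * h_km)

def storm_rate(d_km, h_km, years):
    """Weighted storm recurrence rate (storms per year per km) at the site."""
    return gaussian_weights(d_km, h_km).sum() / years

def cv_error(h_km, n_splits=200):
    """Schematic cross-validation: split the sample into complementary halves and
    compare per-storm rate estimates; the minimum over h gives the 'optimal' size."""
    err = 0.0
    for _ in range(n_splits):
        mask = rng.random(distances.size) < 0.5
        if mask.all() or (~mask).all():
            continue
        r1 = storm_rate(distances[mask], h_km, record_years) / mask.sum()
        r2 = storm_rate(distances[~mask], h_km, record_years) / (~mask).sum()
        err += (r1 - r2) ** 2
    return err / n_splits

candidates = [50.0, 100.0, 150.0, 200.0, 300.0]
errors = [cv_error(h) for h in candidates]
h_opt = candidates[int(np.argmin(errors))]
print(f"SRR with h = {h_opt:.0f} km: "
      f"{storm_rate(distances, h_opt, record_years):.2e} storms/yr/km")
```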

This section describes the probability distributions that have been used to fit storm parameters in recent surge studies. The FEMA Operating Guidance (FEMA) recommends the application of either of these two distributions but stresses that the most important consideration is the quality of the fit to the data. It recognizes that another distribution may be used if its shape is more consistent with the observed empirical distribution. The small sample size that typically characterizes the tropical cyclone climatology impacting a coastal location exacerbates the uncertainty associated with the distribution parameters.

Bootstrap resampling methods have been used in previous studies to recalculate the values of the distribution parameters (FEMA). FEMA recommends that its mapping partners assume a correlation and ensure that sufficient data are analyzed to capture it.

Consideration of the negative correlation would help limit the creation of unrealistic synthetic storms with extreme intensity that also have extremely large radii.
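To illustrate one way such a negative intensity-size correlation can be imposed when drawing synthetic storm parameters, the sketch below uses a Gaussian copula; the marginal distributions, their parameters, and the correlation value are assumptions for illustration, not values from the guidance:

```python
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(7)
n = 5000
rho = -0.4                       # assumed negative correlation on the Gaussian scale

# Correlated standard normals -> uniforms -> assumed marginals (Gaussian copula).
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
u = norm.cdf(z)

# Assumed marginals: central pressure deficit (mb) and radius to maximum winds (nmi).
dp   = lognorm.ppf(u[:, 0], s=0.4, scale=45.0)
rmax = lognorm.ppf(u[:, 1], s=0.5, scale=25.0)

print("Sample correlation between deficit and radius:", np.corrcoef(dp, rmax)[0, 1])
```

With the negative correlation in place, draws that combine an extreme pressure deficit with an extremely large radius become rare, which is the effect described above.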

For the Mississippi Coastal Analysis Program (FEMA), the lognormal distribution derived for the high-intensity storms did not fit the low-intensity storm data, and the data had to be refitted. The heading direction data for high-intensity storms were fitted to a Beta distribution.

For the low-intensity storm data, the normal distribution resulted in a better fit. A study performed by Toro in support of the Mississippi Coastal Analysis Project found that translational speed was well approximated by a lognormal distribution. In that study, the translational speed was treated as independent of the other characteristics. The discretization of the parameter distributions is one of the main attributes that differentiates the JPM methods from one another. The CDF was divided into class intervals based on percentiles.

Example summary sheets provided in the document show that the central pressure deficit distribution was divided into intervals at the 1st, 5th, 15th, 30th, 50th, 70th, and 90th percentiles; forward speed at the 5th, 20th, 40th, 60th, 80th, and 95th percentiles; and heading and radius to maximum winds at a similar set of percentiles. The uniform discretization used in other studies was not based on percentiles but on uniform intervals of the parameter space.
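A minimal sketch of discretizing a fitted marginal CDF into percentile-based class intervals, each represented by a single parameter value and probability mass (the distribution and the percentile break points are illustrative):

```python
import numpy as np
from scipy.stats import lognorm

# Assumed fitted marginal for central pressure deficit (mb); parameters are illustrative.
dist = lognorm(s=0.4, scale=45.0)

# Percentile break points defining the class intervals (illustrative).
breaks = np.array([0.0, 0.10, 0.30, 0.50, 0.70, 0.90, 1.0])

for p_lo, p_hi in zip(breaks[:-1], breaks[1:]):
    prob = p_hi - p_lo                          # probability mass of the interval
    p_mid = 0.5 * (p_lo + p_hi)                 # representative percentile
    value = dist.ppf(p_mid)                     # representative parameter value
    print(f"interval {p_lo:.2f}-{p_hi:.2f}: dp ~ {value:5.1f} mb, weight {prob:.2f}")
```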

In the approach of Toro et al., the first step involves a general discretization of the central pressure deficit into three intervals corresponding to hurricane categories 3, 4, and 5 on the Saffir-Simpson hurricane wind scale (SSHWS). The joint probability distributions within each interval are then discretized using Bayesian Quadrature (BQ), optimizing the values of the hurricane parameters and determining their associated probabilities. A source of uncertainty in the BQ is its subjective component in the specification of the correlation distances associated with the hurricane parameters.

The correlation distance refers to a property of the autocovariance function of the joint PDF of the parameters, represented as a Gaussian random process with zero mean. Sampling nodes are spread more evenly, and are more closely matched to the marginal distribution, in the directions in which the PDF of the parameters has shorter correlation distances. FEMA provides guidance on additional verification methods, such as using a parametric model, comparing statistical moments of the original distributions to those calculated from the BQ discretization, and assessing the surge CDF curve at various locations for the occurrence of large jumps.

The BQ method was used for the discretization of the Rmax and Vt marginal distributions. This approach was considered more appropriate given the regional nature of the study. For a given region, tracks are constructed based on specified landfall locations (or alternative reference locations) and heading directions. The number of tracks also depends on the size of the study domain and the spacing between tracks.

A fixed track spacing, typically a fraction of a degree, is used. The numerical hydrodynamic modeling of surge requires information on the variation of the hurricane parameters along the track. The synthetic storm parameters in the JPM-OS approach are determined at a coastal reference point, so the variation of these parameters along the track needs to be modeled.

Several relationships between storm location and geography have been identified that inform the application of variations to the synthetic storms along their tracks.

Since the center of the hurricane is a low-pressure zone, the term filling is used to describe the increase in the central pressure (Ho et al.). Translational speed has been found to be primarily related to latitude, increasing as latitude increases. Landfall is considered to be the point where the center of low pressure crosses the coastline, which is often idealized in the JPM studies.

This weakening was characterized by an increase in central pressure and radius to maximum winds and a decrease in the Holland B parameter. The phenomenon was not observed for other U.S. coastal regions. In the Louisiana study (Resio et al.), this pre-landfall weakening was applied, starting 167 km (90 nmi) away from the coast, to synthetic storms with a radius to maximum winds larger than 19 km (10 nmi). The decay of the central pressure deficit was modeled with a linear equation that depends on the location and on the change in central pressure over that distance.
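A schematic of applying a linear pre-landfall reduction of the central pressure deficit inside a fixed offshore distance is shown below; the total reduction fraction and the storm values are placeholders, not the coefficients of the Louisiana study:

```python
def prelandfall_deficit(dp_offshore_mb, dist_to_landfall_km,
                        start_km=167.0, total_reduction_frac=0.12):
    """Linearly reduce the pressure deficit from `start_km` offshore to landfall.

    dp_offshore_mb        : pressure deficit before weakening begins (mb)
    dist_to_landfall_km   : current distance of the storm center to the coast (km)
    total_reduction_frac  : assumed fractional reduction reached at landfall
    """
    if dist_to_landfall_km >= start_km:
        return dp_offshore_mb
    frac_travelled = 1.0 - dist_to_landfall_km / start_km
    return dp_offshore_mb * (1.0 - total_reduction_frac * frac_travelled)

for d in (200.0, 150.0, 100.0, 50.0, 0.0):
    print(f"{d:5.0f} km offshore: dp = {prelandfall_deficit(80.0, d):5.1f} mb")
```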

Storm heading and forward speed were assumed constant over this final stretch prior to landfall. Vickery and Twisdale developed a generic pressure deficit scaling model using the pressure deficit histories of four strong hurricanes that impacted the study area. The filling-rate exponential decay model applied after landfall was developed using central pressure and position data from HURDAT and adapted for three regions: the Gulf Coast, the Florida peninsula, and the Atlantic Coast.

Later, the model was revisited using updated data and expanded to include the Mid-Atlantic and New England coasts. The treatment of uncertainty in the filling rate model was considered in the calculation of the filling constant, which used the mean values associated with the characteristics of the storm at the time of landfall plus a normally distributed error term with zero mean and standard deviation equal to the modeling error (Vickery). Synthetic storms were divided into two groups depending on whether they made landfall or were bypassing.
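The post-landfall filling model described above has the general exponential form dp(t) = dp0 * exp(-a t); the sketch below draws the filling constant as a mean value plus a zero-mean normal error to mimic that uncertainty treatment, with placeholder coefficients rather than the published regional values:

```python
import numpy as np

rng = np.random.default_rng(0)

def filling_constant(a_mean=0.05, a_sigma=0.015):
    """One realization of the filling constant: mean value plus a zero-mean normal error."""
    return max(a_mean + rng.normal(0.0, a_sigma), 0.0)

def filled_deficit(dp0_mb, hours_after_landfall, a):
    """Exponential decay of the central pressure deficit after landfall: dp0 * exp(-a t)."""
    return dp0_mb * np.exp(-a * np.asarray(hours_after_landfall, dtype=float))

a = filling_constant()                          # drawn once per synthetic storm
print(filled_deficit(80.0, [0, 6, 12, 24], a))  # deficit (mb) at 0, 6, 12, 24 hr after landfall
```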

Linear fits were applied to the ratio of the offshore pressure deficit to the pressure deficit at landfall, as a function of distance to landfall, in order to estimate the pre-landfall filling rate. Holland B and Rmax variations were computed using functions developed by Vickery and Wadhera, which are dependent on central pressure. A similar analysis was done for bypassing storms by establishing three crossing-point latitudes for each region of study. Linear fits were calculated separately for each region and then combined to determine the applied filling rate.

In general, uncertainty quantification for pre-landfall central pressure filling rates was not presented in the referenced studies. The determination of filling rate consisted of a linear fitting of the historical data.

This was identified as a potential area for further evaluation. In the case of post-landfall filling, the most widely used model formally considers uncertainty. Cardone and Cox identified the main approaches as (1) parametric models, (2) steady-state dynamical models, (3) non-steady dynamical models, and (4) kinematic analysis. Steady-state dynamical models based on the Thompson and Cardone planetary boundary layer (PBL) model have been widely applied in post-Katrina storm surge studies to estimate the wind and pressure field time histories produced by the parameterized synthetic TCs.

In general, the PBL model solves for the storm wind and pressure fields by means of numerical integration of the equations of motion of the boundary layer, taking into account the physics of a moving vortex (Cardone and Cox). The model is dynamic in that it is solved along the storm track, taking into account the variations of the storm parameters. Various aspects of model initialization and model calibration that can contribute to model uncertainty have been identified in the literature.

The radial pressure profile depends on the Holland B parameter, so any uncertainty associated with the estimation of this parameter is transferred directly to the profile.
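For reference, the Holland radial surface pressure profile has the form P(r) = Pc + dp * exp[-(Rmax/r)^B]; the short sketch below shows how a change in B propagates directly into the profile (the storm parameters are illustrative):

```python
import numpy as np

def holland_pressure(r_km, pc_mb, dp_mb, rmax_km, B):
    """Holland (1980) radial surface pressure profile."""
    r = np.asarray(r_km, dtype=float)
    return pc_mb + dp_mb * np.exp(-((rmax_km / r) ** B))

radii = np.array([10.0, 25.0, 50.0, 100.0, 200.0])   # distances from the storm center (km)
for B in (1.0, 1.3, 1.6):                             # plausible range of the Holland B parameter
    p = holland_pressure(radii, pc_mb=950.0, dp_mb=63.0, rmax_km=35.0, B=B)
    print(f"B={B:.1f}:", np.round(p, 1))
```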

An important step in the calibration of the PBL model is the comparison of the resulting winds outside the inner core with winds measured at buoys and with aircraft winds reduced from flight level to an elevation of 10 m. This elevation is the reference height used by surge models.

The calibration process consists of varying the input parameters until the resulting wind fields match the best available winds, which consist of the most appropriate measured wind data as evaluated by experienced modelers. Cardone and Cox observed that the PBL model does not require arbitrary calibration constants tied to a particular region or type of storm and concluded that the interaction of the tropical cyclone with its environment could be accounted for by appropriate specification of the input parameters.

The study quantified the combined error of the model and its forcing for the study area as a standard deviation of the simulated surge and further found that the errors associated with the use of PBL winds increased this standard deviation. The study attributed these values to the varying accuracy of the high water marks against which the model results were compared. For all storms, the standard deviation of the wind speed difference was on the order of 1 m/s.

Best wind data can be derived from reanalyses based on historical climate data. The first step consists of generating the wind and pressure fields for the developed synthetic storms. The offshore wave estimates are subsequently calculated with an efficient regional spectral wave model such as the WAve prediction Model (WAM), which provides the wave energy spectra for each storm along the offshore boundary of the nearshore wave model.

Older studies (10 or more years ago) used loose coupling, while recent studies use fully coupled surge and wave models such as CSTORM. The result of this process is the simulation of wind fields, water-surface variations, waves, and nearshore currents that are used as input to probabilistic hazard response models, which may compute, for example, wave runup and overtopping on a levee or wave and flow forces on a wall. Applying these forcing conditions, the two-dimensional (2D), depth-integrated ADCIRC model has proven to accurately predict tidal- and wind-driven water-surface levels.

ADCIRC was applied, for example, in the St. Clair storm wave and water level study (Hesser et al.). The model represents the three-dimensional (3D) equations of motion for simulating tidal circulation and storm-surge propagation over large computational domains. ADCIRC is a finite-element model that allows for high resolution in particular areas of interest (study areas or areas with complex shoreline or bathymetric features). Larger elements can be used in open-ocean regions where less resolution is needed. The model provides accurate and efficient computations over a range of time periods (days to years).

The formulation assumes that the water is incompressible, hydrostatic pressure conditions exist, and that the Boussinesq approximation is valid.

The ADCIRC model can be forced with time-varying water-surface elevations, wind shear stresses, atmospheric pressure gradients, wave radiation stresses, river inflow, and the Coriolis acceleration effect. The selection of input parameters such as the wind drag model and bottom friction values has a significant effect on the results and on model stability, and these choices vary from study to study even for similar bottom and wind characteristics, suggesting considerable uncertainty.
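To make the wind drag sensitivity concrete, the sketch below compares the surface wind stress obtained with the Garratt linear drag law against a capped variant; the cap value and the air density are assumptions, since both vary between studies:

```python
import numpy as np

RHO_AIR = 1.15  # kg/m^3, assumed air density

def drag_garratt(u10):
    """Garratt (1977) linear drag law: Cd = (0.75 + 0.067 * U10) * 1e-3."""
    return (0.75 + 0.067 * np.asarray(u10, dtype=float)) * 1e-3

def drag_capped(u10, cap=3.0e-3):
    """Same law with an upper cap on Cd (the cap value is study dependent; assumed here)."""
    return np.minimum(drag_garratt(u10), cap)

def wind_stress(u10, cd):
    """Surface wind stress tau = rho_air * Cd * U10^2 (N/m^2)."""
    u10 = np.asarray(u10, dtype=float)
    return RHO_AIR * cd * u10 ** 2

winds = np.array([20.0, 35.0, 50.0])   # 10 m wind speeds (m/s)
print("uncapped stress:", np.round(wind_stress(winds, drag_garratt(winds)), 1))
print("capped   stress:", np.round(wind_stress(winds, drag_capped(winds)), 1))
```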

Sources of error for water level modeling and validation have been identified in previous studies in which ADCIRC has been employed (Hanson et al.). The SLOSH model, by contrast, solves the 2D shallow water equations using a finite-difference scheme on a polar grid.

The SLOSH model domain includes open-coast shorelines, bays, rivers, bridges, roads, levees, and other physical landscape features. SLOSH can also model the astronomic tide and variations in the initial water level.

The SLOSH model coverage is subdivided into 32 regions or basins, which are updated every 5 to 10 years. Though the SLOSH model is considered computationally efficient, this efficiency is achieved by imposing several limitations on the model physics, namely by neglecting the nonlinear advection terms, wave interactions, and river inflows.

In addition, the coarse resolution of the SLOSH model grids does not capture local landscape features, which results in overly smoothed model output (Resio and Westerink). The SLOSH model friction is internally parameterized with a depth-dependent, linear, Ekman-based formulation, and land use and vegetation type are not considered in the friction formulation. These friction limitations can lead to over-damping of physical phenomena such as the strong geostrophic setup observed during Hurricane Ike.

In addition, bottom friction and other model settings can have a significant effect on the model results and are highly uncertain (Jelesnianski et al.). These models will be discussed in the next sections. In third-generation spectral wave models such as WAM, the 2D wave spectrum can develop to a limiting frequency without constraints on the spectral shape.

This modeling approach also allows for model improvements at the elementary level of the source-term parameterizations. Previous first- and second-generation wave models generally required implementation-specific tuning of model parameters to improve model results (Tolman and Chalikov). Development of the WAM model has involved an international team of scientists whose efforts have resulted in refinements and improvements to wave modeling techniques over the last 35 years.

Some of the modeling enhancements include the ability to simulate two-way coupling between wind and waves, wave data assimilation, and the medium-range operational forecasting capability.

The current official release version of WAM is Cycle 4. The skill of the WAM model needs to be quantified as part of an analysis by evaluating differences between model results and measurements. The evaluation can be based on quantile-quantile plots or on statistics such as bias, root-mean-square error, regression, correlation, and scatter index, computed at measurement sites (Cialone et al.).
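A small sketch of those skill statistics (bias, root-mean-square error, correlation, and scatter index) computed from paired model and measured values is given below; the arrays are placeholders standing in for co-located model and buoy significant wave heights:

```python
import numpy as np

def skill_stats(model, obs):
    """Basic skill metrics between model results and measurements."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    diff = model - obs
    bias = diff.mean()
    rmse = np.sqrt((diff ** 2).mean())
    corr = np.corrcoef(model, obs)[0, 1]
    scatter_index = rmse / obs.mean()          # one common definition; variants exist
    return {"bias": bias, "rmse": rmse, "corr": corr, "si": scatter_index}

# Placeholder co-located significant wave heights (m).
hs_model = [1.2, 2.4, 3.1, 4.0, 2.2, 1.8]
hs_buoy  = [1.0, 2.6, 3.3, 3.7, 2.0, 1.9]
print({k: round(v, 3) for k, v in skill_stats(hs_model, hs_buoy).items()})
```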

For WAVEWATCH III, the organization of model development from multiple contributors is maintained by using a Subversion server housed at NCEP. The model also includes rudimentary surf-zone source terms and wetting and drying of model grid points. The global domain of that system has an approximately 50 km resolution, with nested regional domains for the Northern Hemisphere oceanic basins at approximately 18 km and 7 km resolution.

The model includes wave refraction, nonlinear resonant interactions, sub-grid representations of unresolved islands, and dynamically updated ice coverage. In earlier versions, the model was limited to regions outside the surf zone. Offshore wave information obtained from wave buoys or from global- or regional-scale wave hindcasts and forecasts is transformed through the nearshore coastal region using nearshore wave transformation models.

The nearshore wave model STWAVE is a steady-state, finite-difference, phase-averaged spectral wave model based on the wave action balance equation (Smith et al.). STWAVE simulates nearshore wave transformation including depth- and current-induced refraction and shoaling, depth- and steepness-induced wave breaking, wind-wave generation and growth, wave-wave interaction, and whitecapping (Smith et al.).

The model bathymetry was generated through Shepard interpolation (Paul et al.). The difficulty in generating a pure oscillatory response in the Bay of Bengal (BOB) region corresponding to the tidal constituent with period T lies in adjusting the exact values of its constants, so some techniques need to be adopted for their precise specification; a stable tidal condition is generated by this process. For the pure surge (in the absence of astronomical tide), the model was also run from a cold start. It is pertinent to point out here that the zero initial values will not affect the calculated results, as their effect on the results will almost disappear over several hours (Zhang et al.).

The numerical experiments were carried out for the April 1991 cyclone. The reasons behind the choice of this cyclone are that the accessibility of data is comparatively good and that the available information about the cyclone is relatively extensive. For discussion of the model-simulated results, the time history of the storm is essential; a detailed description can be found in Paul et al., so only a gist is presented here for convenience. The cyclone was first detected as a depression on 23 April from satellite pictures taken at the Space Research and Remote Sensing Organization. The depression gradually intensified and propagated towards the coast, gaining energy; it then suddenly changed its direction and started moving easterly towards the estuary of the Meghna River.

5 Results and discussion

The program was executed for 80 h, from 26 April to 30 April (UTC), and the results were displayed for the last 48 h, from 28 April to 30 April (UTC), at the coastal stations. It can be seen from the figures that the five coastal stations along the Ganges tidal plain exhibit about the same tidal character, whereas in the other two plains the amplitudes and phases of the tides vary. Our simulated time variation of astronomical tides with respect to the mean sea level (MSL) is in reasonable agreement with the tidal data obtained from the BIWTA. Three stations were chosen for comparison with observations because the accessibility of data is comparatively better at those stations than at the other locations.

The computed pure surge levels at the coastal locations are depicted in the same way as described above. Some of the differences among stations may be a result of the negative (attenuating) impact of the mangrove forest on the surge. Among the stations included in the Meghna deltaic plain, the resurgence was slow; the reason may be that during surges, waters migrated through the numerous rivers situated in this plain (Figs 1 and 2) and hence came back slowly when the resurgence occurred, while along the Chittagong plain the surge interacted with the straight shoreline of the plain. The maximum and minimum peak surge heights were estimated for the stations along the tidal plain, whereas along the Chittagong deltaic plain, and along the adjoining coast, the peak surge values were found to lie within lower ranges.

Figure: Observed track of the April cyclonic storm, after Paul et al.

Our computed water levels at stations such as Companigonj agree fairly well with the results obtained in Paul and Ismail and in Paul et al. To illustrate the interaction phenomena, our computed water levels were plotted together with the observed water levels during the period chosen for displaying results; Figure 10 shows the simulated total water levels obtained from the study with the nonlinear interaction of tide and surge, and the same stations were chosen for the reason mentioned above. The simulated total water levels vary from just over 2 m at some stations to considerably higher values at others, and it can be perceived from the figure that the computed results due to the interaction of tide and surge compare reasonably with the observations. Flather, in his investigation, also found a highest total water level above 7 m, with a simulated peak total water level at the Noakhali coast above 6 m.

Figure: Computed time variation of astronomical tide with respect to the MSL for the chosen display period at different coastal stations along the Ganges tidal plain (a), the Meghna deltaic plain (b), and the Chittagong straight plain (c). In each case, a solid curve represents our computed tidal levels, and a circle represents the tidal datum at that moment obtained from the BIWTA.

Figure: Computed time variation of pure surge levels (in the absence of astronomical tide) with respect to the MSL at different coastal stations along the Ganges tidal plain (a), the Meghna deltaic plain (b), and the Chittagong straight plain (c).

Figure: Computed time variation of water levels with respect to the MSL due to the dynamical interaction of tide and surge associated with the April storm at seventeen coastal stations along the Ganges tidal plain (a), the Meghna deltaic plain (b), and the Chittagong straight plain (c).

We have used almost all of the observed data obtained in the mentioned period. A pronounced effect was found to be produced along the Meghna estuarine region due to the interaction (Figs 11 and following). But in actuality, tide is a continuous process in the sea, and it will always interact with the surge nonlinearly. Therefore, it is crucial to understand and incorporate the tide and surge interaction along the coast of Bangladesh for the prediction of water levels. Our model-simulated results are also compared with the results obtained by the FDM (Paul and Ismail) and by the 3PCD MOLs in coordination with the RK(4,4) method (Paul et al.), and the comparisons are presented in Table 5. To test this effect, as in Paul et al., results were also obtained with the RK(4,4) method under some variant configurations.
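A minimal sketch of how the nonlinear tide-surge interaction is typically diagnosed from such simulations follows: the interaction signal is the total water level from the coupled run minus the sum of the tide-only and surge-only runs. The time series below are synthetic placeholders, not output from the model discussed here:

```python
import numpy as np

t = np.arange(0.0, 48.0, 0.5)                       # hours
tide_only  = 1.5 * np.sin(2 * np.pi * t / 12.42)    # semidiurnal tide (m), synthetic
surge_only = 3.0 * np.exp(-((t - 30.0) / 6.0) ** 2) # surge pulse (m), synthetic
# A coupled model run would produce this directly; here it is mimicked with a small
# amplitude distortion standing in for the nonlinear interaction.
total = tide_only + surge_only + 0.3 * tide_only * surge_only / 3.0

interaction = total - (tide_only + surge_only)
print(f"Peak interaction contribution: {interaction.max():.2f} m "
      f"at t = {t[interaction.argmax()]:.1f} h")
```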

Storm surge is the rise in seawater level caused solely by a storm. Storm tide is the total observed seawater level during a storm, which is the combination of storm surge and normal high tide.

Please find below my reasoning. In addition, various past studies have extensively discussed similar characteristics. However, in the Methods section, the authors say that they also include TCs that have undergone extratropical transition. These systems can no longer be considered tropical in nature (rather, they are extratropical), and thereby have different characteristics than TCs; they should thus be excluded from the analysis.

Nowhere in the introduction is there any mention of the TC characteristics that will be under consideration in this manuscript. Please add this description.


