A one-hundred-year flood is the level of flood water expected to be equaled or exceeded once every 100 years on average. The 100-year flood is more accurately referred to as the 1% flood, since it has a 1% chance of being equaled or exceeded in any single year. Based on the expected flood water level, a predicted area of inundation can be mapped out. This floodplain map figures importantly in building permits, environmental regulations, and flood insurance.
A 100-year flood has approximately a 63.4% chance of occurring in any 100-year period, not a 100 percent chance. The probability P_e that a flood of a given size will be equaled or exceeded at least once during a period of n years can be calculated using P_e = 1 − [1 − (1/T)]^n, where T is the return period of a given storm threshold (e.g., 100-year, 50-year, 25-year, and so forth) and n is the number of years in the period. The exceedance probability P_e is also described as the natural, inherent, or hydrologic risk of failure.[1][2]
Ten-year floods have a 10% chance of occurring in any given year (P_e = 0.10); 500-year floods have a 0.2% chance of occurring in any given year (P_e = 0.002); and so on. The percent chance of an X-year flood occurring in a single year can be calculated by dividing 100 by X.
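The two formulas above can be sketched in a few lines of Python; the function names are illustrative, not from any standard library:

```python
def annual_probability(T):
    """Chance that the T-year flood is equaled or exceeded in any single year."""
    return 1.0 / T

def exceedance_probability(T, n):
    """Chance that the T-year flood is equaled or exceeded at least once
    in n years: P_e = 1 - [1 - (1/T)]^n."""
    return 1.0 - (1.0 - 1.0 / T) ** n

# A 100-year flood has a 1% chance in any single year...
print(annual_probability(100))                      # 0.01
# ...but roughly a 63.4% chance of occurring at least once in a century.
print(round(exceedance_probability(100, 100), 3))   # 0.634
```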
The field of extreme value theory was created to model rare events such as 100-year floods for the purposes of civil engineering. This theory is most commonly applied to the maximum or minimum observed stream flows of a given river. In desert areas with only ephemeral washes, the method is instead applied to the maximum observed rainfall over a given period of time (24 hours, 6 hours, or 3 hours). The extreme value analysis considers only the most extreme event observed in a given year: between a large spring runoff and a heavy summer rainstorm, whichever produced more runoff would be counted as the extreme event, while the smaller event would be ignored in the analysis (even though both may have been capable of causing severe flooding in their own right).
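As a concrete sketch of an annual-maxima analysis, the following fits a Gumbel (Extreme Value Type I) distribution, one common choice within extreme value theory, to a hypothetical series of annual maximum flows using the method of moments; the data values and function names are invented for illustration:

```python
import math
import statistics

# Hypothetical annual-maximum flows (m^3/s), one value per year of record.
annual_maxima = [412, 380, 555, 610, 498, 720, 365, 440, 530, 590,
                 470, 820, 510, 390, 605, 450, 680, 560, 430, 700]

# Method-of-moments fit of a Gumbel (EV Type I) distribution.
mean = statistics.mean(annual_maxima)
stdev = statistics.stdev(annual_maxima)
beta = stdev * math.sqrt(6) / math.pi   # scale parameter
mu = mean - 0.5772 * beta               # location (0.5772 = Euler-Mascheroni constant)

def return_level(T):
    """Flow expected to be equaled or exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

print(round(return_level(10)))    # estimated 10-year flood
print(round(return_level(100)))   # estimated 100-year flood (larger, rarer)
```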
Several assumptions are made to complete the analysis that determines the 100-year flood. First, the extreme events observed in each year must be independent from year to year; in other words, the maximum river flow rate in 1984 cannot be significantly correlated with the observed flow rate in 1985, 1985 cannot be correlated with 1986, and so forth. Second, the observed extreme events must come from the same probability distribution function. Third, a given storm (rainfall or river flow rate measurement) is assumed to occur no more than once per year. Fourth, the probability distribution function is assumed to be stationary, meaning the mean, standard deviation, and maximum/minimum values are not increasing or decreasing over time; this concept is referred to as stationarity.[3][4]
The first assumption is unlikely to be valid in all places. Studies have shown that extreme events in certain watersheds in the U.S. are not significantly correlated, but this must be determined on a case-by-case basis. The second assumption is often valid if the extreme events are observed under similar climate conditions: for example, if the extreme events on record all come from late-summer thunderstorms (as in the southwest U.S.) or from snowpack melting (as in the north-central U.S.), this assumption should hold. If, however, some extreme events come from thunderstorms, others from snowpack melting, and others from hurricanes, the assumption is most likely not valid. The third assumption is a problem only when forecasting a low but maximal flow event (say, trying to find the maximum event for the 1-year storm); since this is not typically a goal in extreme value analysis or in civil engineering design, the situation rarely arises. The final assumption, stationarity, has come into question in light of research on climate change. In short, the argument is that if temperatures are changing and precipitation cycles are being altered, there is compelling evidence that the probability distribution is also changing (see the Science article "Stationarity Is Dead"). The simplest implication is that not all of the historical data are, or can be, considered valid input to the extreme event analysis.
When these assumptions are violated, an unknown amount of uncertainty is introduced into the reported value of what the 100-year flood means in terms of rainfall intensity or river flood depth. When all of the inputs are known, the uncertainty can be measured in the form of a confidence interval; for example, one might say there is a 95% chance that the 100-year flood is greater than X but less than Y. Without analyzing the statistical uncertainty of a given 100-year flood, scientists and engineers can decrease the uncertainty by using two practical rules. First, forecast an extreme event whose return period is no more than double the number of observation years (e.g., with 27 observed river measurements one can estimate a 50-year event, since 27 × 2 = 54, but not a 100-year event). Second, forecast a value that is less than the maximum observed value (e.g., if the maximum rainfall event on record is 5.25 inches/hour, the 100-year storm estimate should be less than this).
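The two rules of thumb can be expressed as simple checks; the function names and the example numbers (taken from the text) are illustrative:

```python
def within_record_rule(return_period, n_observations):
    """Rule 1: do not forecast an event rarer than twice the record length."""
    return return_period <= 2 * n_observations

def below_maximum_rule(estimate, observed_maximum):
    """Rule 2: keep the forecast below the largest value actually observed."""
    return estimate < observed_maximum

# With 27 years of river records, a 50-year estimate is defensible
# (27 * 2 = 54), but a 100-year estimate is not.
print(within_record_rule(50, 27), within_record_rule(100, 27))  # True False
# With a record maximum of 5.25 in/hr, a 100-year estimate of 5.0 passes rule 2.
print(below_maximum_rule(5.0, 5.25))                            # True
```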
The amount, location, and timing of water reaching a drainage channel from natural precipitation and controlled or uncontrolled reservoir releases determines the flow at downstream locations. Some precipitation evaporates, some slowly percolates through soil, some may be temporarily sequestered as snow or ice, and some may produce rapid runoff from surfaces including rock, pavement, roofs, and saturated or frozen ground. The fraction of incident precipitation promptly reaching a drainage channel has been observed to range from nil for light rain on dry, level ground to as high as 170 percent for warm rain on accumulated snow.[5]
Most precipitation records are based on a measured depth of water received within a fixed time interval. Frequency of a precipitation threshold of interest may be determined from the number of measurements exceeding that threshold value within the total time period for which observations are available. Individual data points are converted to intensity by dividing each measured depth by the period of time between observations. This intensity will be less than the actual peak intensity if the duration of the rainfall event was less than the fixed time interval for which measurements are reported. Convective precipitation events (thunderstorms) tend to produce shorter duration storm events than orographic precipitation. Duration, intensity, and frequency of rainfall events are important to flood prediction. Short duration precipitation is more significant to flooding within small drainage basins.[6]
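A minimal sketch of the depth-to-intensity conversion and threshold-frequency count described above, using a hypothetical fixed-interval gauge record:

```python
# Hypothetical rain-gauge record: depth (inches) per 1-hour measurement interval.
INTERVAL_HOURS = 1.0
depths = [0.0, 0.1, 0.0, 1.2, 0.4, 0.0, 2.1, 0.0, 0.3, 0.9]

# Intensity is measured depth divided by the measurement interval (in/hr here).
# Note this understates the true peak if the storm was shorter than the interval.
intensities = [d / INTERVAL_HOURS for d in depths]

# Frequency of exceeding a threshold: exceedance count over the record length.
threshold = 1.0  # in/hr
exceedances = sum(1 for i in intensities if i > threshold)
record_hours = len(depths) * INTERVAL_HOURS
print(exceedances, exceedances / record_hours)  # 2 exceedances, 0.2 per hour
```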
The most important upslope factor in determining flood magnitude is the land area of the watershed upstream of the area of interest. Rainfall intensity is the second most important factor for watersheds of less than approximately 30 square miles (80 square kilometers). The main channel slope is the second most important factor for larger watersheds. Channel slope and rainfall intensity become the third most important factors for small and large watersheds, respectively.[7]
Water flowing downhill ultimately encounters downstream conditions slowing movement. The final limitation is often the ocean or a natural or artificial lake. Elevation changes such as tidal fluctuations are significant determinants of coastal and estuarine flooding. Less predictable events like tsunamis and storm surges may also cause elevation changes in large bodies of water. Elevation of flowing water is controlled by the geometry of the flow channel.[7] Flow channel restrictions like bridges and canyons tend to control water elevation above the restriction. The actual control point for any given reach of the drainage may change with changing water elevation, so a closer point may control for lower water levels until a more distant point controls at higher water levels.
Effective flood channel geometry may be changed by growth of vegetation, accumulation of ice or debris, or construction of bridges, buildings, or levees within the flood channel.
Statistical analysis requires that all data in a series be gathered under similar conditions. A simple prediction model might be based upon observed flows within a fixed channel geometry.[8] Alternatively, prediction may rely upon assumed channel geometry and runoff patterns using historical precipitation records. The rational method has been used for drainage basins small enough that observed rainfall intensities may be assumed to occur uniformly over the entire basin. The time of concentration is the time required for runoff from the most distant point of the upstream drainage area to reach the point of the drainage channel controlling flooding of the area of interest, and it defines the critical duration of peak rainfall for that area.[9] The critical duration of intense rainfall might be only a few minutes for roof and parking-lot drainage structures, while cumulative rainfall over several days would be critical for river basins.
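The rational method mentioned above is commonly written Q = C·i·A. A minimal sketch, assuming U.S. customary units, where the conversion factor from acre-inches per hour to cubic feet per second (about 1.008) is conventionally taken as 1; the scenario values are hypothetical:

```python
def rational_method_peak_flow(C, i, A):
    """Peak runoff (cfs) by the rational method, Q = C * i * A.
    C: dimensionless runoff coefficient (0-1),
    i: rainfall intensity (in/hr) for a storm whose duration equals
       the time of concentration of the basin,
    A: drainage area (acres).
    The ~1.008 unit-conversion factor is taken as 1, as is customary."""
    return C * i * A

# Hypothetical 10-acre parking lot (C = 0.9) in a 2 in/hr design storm:
print(rational_method_peak_flow(0.9, 2.0, 10.0))  # 18.0 cfs
```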
Extreme flood events often result from coincidence, such as unusually intense, warm rainfall melting a heavy snowpack, producing channel obstructions from floating ice, and releasing small impoundments like beaver dams.[10] Coincident events may cause flooding outside the statistical distribution anticipated by simplistic prediction models.[11] Debris modification of channel geometry is common when heavy flows move uprooted woody vegetation and flood-damaged structures and vehicles, including boats and railway equipment.
