In finance, volatility most frequently refers to the standard deviation of the continuously compounded returns of a financial instrument within a specific time horizon. It is used to quantify the risk of the financial instrument over the specified time period. Volatility is normally expressed in annualized terms, and it may either be an absolute number ($5) or a fraction of the mean (5%).
Volatility as described here refers to the actual current volatility of a financial instrument for a specified period (for example, 30 days or 90 days). It is the volatility of a financial instrument based on historical prices over the specified period, with the last observation being the most recent price. This phrase is used particularly when one wishes to distinguish the actual current volatility of an instrument from its implied volatility, the volatility implied by the market prices of its options.
For a financial instrument whose price follows a Gaussian random walk, or Wiener process, the width of the distribution increases as time increases. This is because there is an increasing probability that the instrument's price will be farther away from the initial price as time increases. However, rather than increasing linearly, the volatility increases with the square root of time, because some fluctuations are expected to cancel each other out, so the most likely deviation after twice the time will not be twice the distance from zero.
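This square-root scaling can be checked with a small simulation. The sketch below (pure Python, illustrative only; the function name `terminal_std` is ours, not a library call) estimates the spread of a unit-step Gaussian random walk at two horizons and shows the ratio is near √2, not 2:

```python
import math
import random

random.seed(0)

def terminal_std(n_steps, n_paths=5000):
    """Standard deviation of a Gaussian random walk's final displacement."""
    finals = []
    for _ in range(n_paths):
        pos = 0.0
        for _ in range(n_steps):
            pos += random.gauss(0.0, 1.0)  # unit-variance step
        finals.append(pos)
    mean = sum(finals) / n_paths
    return math.sqrt(sum((x - mean) ** 2 for x in finals) / n_paths)

# Doubling the horizon widens the distribution by about sqrt(2), not 2.
ratio = terminal_std(200) / terminal_std(100)
print(ratio)  # close to sqrt(2) ≈ 1.41, up to sampling noise
```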
Since observed price changes do not follow Gaussian distributions, others such as the Lévy distribution are often used.^{[1]} These can capture attributes such as "fat tails", although the variance of a Lévy alpha-stable distribution is infinite whenever the stability exponent is less than 2.
When investing directly in a security, volatility is often viewed as a negative in that it represents uncertainty and risk. However, with other investing strategies, volatility is often desirable. For example, if an investor is short at the peaks and long at the lows of a security, the profit will be greatest when volatility is highest.
In today's markets, it is also possible to trade volatility directly, through the use of derivative securities such as options and variance swaps. See Volatility arbitrage.
Volatility does not measure the direction of price changes, merely how dispersed they are expected to be. This is because when calculating standard deviation (or variance), all differences are squared, so that negative and positive differences are combined into one quantity. Two instruments with different volatilities may have the same expected return, but the instrument with higher volatility will have larger swings in value at the end of a given period of time.
For example, a lower volatility stock may have an expected (average) return of 7%, with annual volatility of 5%. This would indicate returns from approximately −3% to 17% most of the time (19 times out of 20, or 95%). A higher volatility stock, with the same expected return of 7% but with annual volatility of 20%, would indicate returns from approximately −33% to 47% most of the time (19 times out of 20, or 95%).
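These bands can be computed directly. The sketch below assumes the normal approximation and treats mean ± 2σ as the 95% range (the function name is illustrative, not from any standard library):

```python
def return_range_95(expected, vol):
    """Approximate 95% one-year return range under a normal assumption: mean ± 2σ."""
    return expected - 2 * vol, expected + 2 * vol

lo1, hi1 = return_range_95(0.07, 0.05)  # ≈ (-0.03, 0.17): roughly -3% to 17%
lo2, hi2 = return_range_95(0.07, 0.20)  # ≈ (-0.33, 0.47): roughly -33% to 47%
print(lo1, hi1)
print(lo2, hi2)
```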
Volatility is a poor measure of risk, as explained by Peter Carr, "it is only a good measure of risk if you feel that being rich then being poor is the same as being poor then rich".
Although the Black–Scholes equation assumes predictable constant volatility, this is not observed in real markets. Among the models that address this are Bruno Dupire's local volatility, Poisson-process models where volatility jumps to new levels with a predictable frequency, and the increasingly popular Heston model of stochastic volatility.^{[2]}
It is commonly observed that many types of assets experience periods of high and low volatility. That is, during some periods prices rise and fall quickly, while during other times they might not seem to move at all.
Periods when prices fall quickly (a crash) are often followed by prices going down even more, or going up by an unusual amount. Also, a time when prices rise quickly (a bubble) may often be followed by prices going up even more, or going down by an unusual amount.
The converse behavior, 'doldrums', can last for a long time as well.
Most typically, extreme movements do not appear 'out of nowhere'; they are presaged by larger-than-usual movements. This is termed autoregressive conditional heteroskedasticity. Of course, whether such large movements have the same direction, or the opposite, is more difficult to say. And an increase in volatility does not always presage a further increase—the volatility may simply go back down again.
The annualized volatility σ is the standard deviation of the instrument's logarithmic returns in a year.
The generalized volatility σ_{T} for time horizon T in years is expressed as σ_{T} = σ√T.
Therefore, if the daily logarithmic returns of a stock have a standard deviation of σ_{SD} and the time period of returns is P, the annualized volatility is σ = σ_{SD}√(1/P).
A common assumption is that P = 1/252 (there are 252 trading days in any given year). Then, if σ_{SD} = 0.01, the annualized volatility is σ = 0.01√252 ≈ 0.1587, or about 15.87%.
The monthly volatility (i.e., T = 1/12 of a year) would be σ_{month} = 0.1587√(1/12) ≈ 0.0458.
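The steps above can be sketched in a few lines (the function name `annualize` is illustrative, not from any standard library):

```python
import math

def annualize(sigma_period, period_years):
    """Scale a per-period standard deviation of log returns to annual volatility:
    sigma = sigma_period * sqrt(1 / P)."""
    return sigma_period * math.sqrt(1.0 / period_years)

sigma_daily = 0.01
P = 1.0 / 252  # 252 trading days per year

sigma_annual = annualize(sigma_daily, P)             # ≈ 0.1587
sigma_monthly = sigma_annual * math.sqrt(1.0 / 12)   # ≈ 0.0458
print(sigma_annual, sigma_monthly)
```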
Note that the formula used to annualize returns is not deterministic, but is an extrapolation valid for a random walk process whose steps have finite variance. Generally, the relation between volatility in different time scales is more complicated, involving the Lévy stability exponent α: σ_{T} = T^{1/α}σ.
If α = 2 you get the Wiener process scaling relation, but some people believe α < 2 for financial activities such as stocks, indexes, and so on. This was discovered by Benoît Mandelbrot, who looked at cotton prices and found that they followed a Lévy alpha-stable distribution with α = 1.7. (See New Scientist, 19 April 1997.) Mandelbrot's conclusion is, however, not accepted by mainstream financial econometricians.
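The effect of the stability exponent on the scaling relation σ_{T} = T^{1/α}σ can be illustrated numerically. In this sketch the annual volatility of 0.16 is a hypothetical value chosen for the example, and the function name is ours:

```python
def scaled_vol(sigma, T, alpha):
    """Generalized volatility over horizon T (years) with Lévy stability exponent alpha:
    sigma_T = T**(1/alpha) * sigma."""
    return T ** (1.0 / alpha) * sigma

sigma = 0.16  # hypothetical annual volatility

# Over a 4-year horizon:
wiener = scaled_vol(sigma, 4, 2.0)  # Wiener scaling: 0.16 * 4**(1/2) = 0.32
levy = scaled_vol(sigma, 4, 1.7)    # alpha < 2 scales faster: ≈ 0.36
print(wiener, levy)
```

With α < 2, volatility grows faster than the square root of time, which is one way heavy-tailed models depart from the random-walk extrapolation.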
Suppose you notice that a market price index, which is at approximately 10,000, moves about 100 points a day on average. This would constitute a 1% (up or down) daily movement.
To annualize this, you can use the "rule of 16", that is, multiply by 16 to get 16% as the overall (annual) volatility. The rationale for this is that 16 is the square root of 256, which is approximately the number of actual trading days in a year. This uses the statistical result that the standard deviation of the sum of n independent variables (with equal standard deviations) is √n times the standard deviation of the individual variables.
It also takes the average magnitude of the observations as an approximation to the standard deviation of the variables. Under the assumption that the variables are normally distributed with mean zero and standard deviation σ, the expected value of the magnitude of the observations is √(2/π)σ = 0.798σ, thus the observed average magnitude of the observations may be taken as a rough approximation to σ.
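As a sketch, the index example above can be worked both ways: the quick rule of 16, and a version that applies the √(2/π) ≈ 0.798 correction before annualizing over 252 trading days. The numbers are the hypothetical ones from the example:

```python
import math

avg_daily_move = 100 / 10000  # 1% average daily move on a 10,000-point index

# Quick "rule of 16": treat the average magnitude as sigma and multiply by 16.
rule_of_16 = avg_daily_move * 16  # 16% annual volatility

# More careful: for a zero-mean normal, E|X| = sqrt(2/pi) * sigma ≈ 0.798 * sigma,
# so divide the average magnitude by 0.798 to recover sigma, then scale by sqrt(252).
sigma_daily = avg_daily_move / math.sqrt(2 / math.pi)
corrected = sigma_daily * math.sqrt(252)  # ≈ 19.9%
print(rule_of_16, corrected)
```

The gap between the two figures shows how rough the rule of 16 is: it quietly offsets the 0.798 underestimate against using 256 rather than 252 days, and the offsets do not fully cancel.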

