The importance of volatility
In this article, we will explain the basic concept of volatility: what it is, how it is calculated, the difference between implied and historical volatility, and how to model it. You have probably already heard about volatility, so we don't want to simply repeat information you already know; in our articles, we always try to cover the more exciting material. Before we dive deeper into volatility, let me quickly explain some basics.
WHAT IS VOLATILITY
Volatility is a risk metric: it tells us how much an asset's price can swing from today's level while still being considered a standard move. The higher the volatility, the larger the price changes we can expect, and therefore the riskier the asset. There are several ways to measure volatility; we will explain a few of them along with the pros and cons of each.
Basically, there are two main types: implied volatility and historical volatility.
We will get to them soon. First, we need a basic understanding of how to look at prices, and the market as a whole, mathematically.
Mathematical properties of prices
For decades, mathematicians, and everyone who uses financial software, have relied on one fundamental assumption: the log-normality of prices.
First, I will explain what this means, and then what is right and what is wrong with it. We mathematicians try to understand the world around us through equations and theories. Some are more precise than others, but they help us a lot. Especially in a world full of randomness, we need to agree on underlying assumptions, which are mostly based on empirical experience. Even you, as a trader, use this assumption every day without knowing it, simply by using metrics provided by your broker.
We treat the price of an asset as a random variable. Every random variable follows some probability distribution with certain properties. You have probably heard of the normal distribution; it has nice features and is easy to work with. So why do we assume a log-normal distribution for prices? Because the log-normal distribution is skewed (drops in price behave differently from gains) and is always positive, just like asset prices. It is simple and describes the market better than a plain normal distribution.
When we assume prices have a log-normal distribution, the logarithmic returns calculated from them have a normal distribution, which we can work with very efficiently: it is practical, straightforward, and has excellent technical properties (if you like math, this article explains it very nicely).
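The step from prices to log-returns is a one-liner. A minimal sketch with hypothetical closing prices (the array values are made up for illustration):

```python
import numpy as np

# Hypothetical daily closing prices
prices = np.array([100.0, 101.5, 99.8, 102.3, 101.0])

# Log-return: r_t = ln(P_t / P_{t-1})
log_returns = np.diff(np.log(prices))
```

A convenient property of log-returns is that they add up: the sum over a period equals the log-return of the whole period.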
How is volatility calculated?
Let’s look at the distribution of log-returns for SPX from January 2000 to August 2020. As I said, a normal distribution for log-returns is only an approximation, and not a perfect one; it only partly solves the problem of asymmetric returns. The plot shows a histogram with the empirical kernel density of SPX daily returns (blue) together with the probability density function (PDF) of a normal distribution (red). In the real distribution, there are more observations around zero but also far more extreme observations, which, according to the normal distribution, should practically never occur. In other words, asset returns have a heavy-tailed, skewed distribution. The plot also shows confidence intervals constructed from the standard deviation (or historical volatility), σ (sigma).
Here we see a straightforward interpretation of volatility in terms of confidence intervals. There is a 68.27% probability that the return falls in the green region (mean μ plus or minus one standard deviation σ). The light green range spans two standard deviations and covers 95.45% of all observations. Note that we calculated daily volatility (you will usually see annualized or monthly volatility).
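The one- and two-sigma coverage probabilities are easy to verify numerically. A sketch on synthetic normal data (the mean and standard deviation below are hypothetical, chosen to be roughly SPX-like; real returns, as noted above, have heavier tails):

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic daily log-returns: mean 0.03%, std 1.2% (hypothetical values)
r = rng.normal(0.0003, 0.012, 100_000)

mu, sigma = r.mean(), r.std(ddof=1)
within_1sd = np.mean(np.abs(r - mu) < sigma)      # ~0.6827 for normal data
within_2sd = np.mean(np.abs(r - mu) < 2 * sigma)  # ~0.9545
annualized_vol = sigma * np.sqrt(252)             # daily -> annual scaling
```

On real SPX returns the one-sigma share comes out higher than 68% (more mass near zero) and extreme observations are far more frequent than the normal model allows.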
Imagine a stock with a monthly volatility of 15%. This means there is a 68% probability that the price will be somewhere in the interval -15% to +15% from the current value, or in the range -30% to +30% with a probability of 95%. This approximation is useful but not exact, because the normal distribution does not have heavy tails. According to a normal distribution fitted to SPX daily log-returns, returns above 6% or below -6% should essentially never occur, but we know otherwise. There are a few observations beyond 10%, but for better readability of the graph they were clipped into the interval (-7%, 7%).
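Since the article switches between daily, monthly, and annualized volatility, it is worth showing the conversion explicitly. Under the usual assumption of independent returns, volatility scales with the square root of time; a sketch using the 15% monthly figure from the example above:

```python
from math import sqrt

monthly_vol = 0.15  # 15% monthly volatility, as in the example above

# Square-root-of-time rule (assumes i.i.d. returns)
annual_vol = monthly_vol * sqrt(12)   # 12 months per year, ~0.52
daily_vol = monthly_vol / sqrt(21)    # ~21 trading days per month, ~0.033
```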
There is a Nobel Prize-winning theory, the Black-Scholes-Merton equation, for valuing option contracts. This theory relies on flawed assumptions and takes the wrong volatility as input (historical), yet its results are very close to reality, and it remains the most widely used equation in financial markets. Nowadays there are many modifications and approximations, but they are beyond the scope of this article. Very simplified: “we use the wrong equation, fill it with the wrong input, but the result is almost right.” How funny and unexpected.
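For the curious, the Black-Scholes formula for a European call is compact enough to implement in a few lines with only the standard library. The inputs in the example are hypothetical:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.
    S: spot, K: strike, T: years to expiry, r: risk-free rate,
    sigma: annualized volatility."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - K * exp(-r * T) * N(d2)

# Hypothetical ATM call: spot 100, strike 100, 1 year, 5% rate, 20% vol
price = bs_call(S=100, K=100, T=1.0, r=0.05, sigma=0.20)  # ~10.45
```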
Here comes implied volatility. When we input historical volatility, we get a theoretical value for a put or call option. Implied volatility is computed by reversing the equation: we input the option’s actual market price and solve for the volatility. For stocks, implied volatility is usually calculated from ATM options close to one-month expiration, or as a weighted average of volatilities from several option contracts with different strikes and expiration dates. Note that implied volatility from option contracts is quoted as an annualized percentage; VIX (the CBOE Volatility Index for the S&P 500) measures the expected volatility over roughly the next month (about 21 trading days), but it, too, is quoted as an annualized percentage.
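"Reversing the equation" can be sketched directly: since the Black-Scholes call price increases monotonically in volatility, a simple bisection recovers the implied volatility from a market price. The numbers below are hypothetical; production code would typically use a faster root-finder such as Newton's method:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    N = NormalDist().cdf
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

def implied_vol(market_price, S, K, T, r, lo=1e-4, hi=5.0):
    """Invert bs_call for sigma by bisection (price is increasing in sigma)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < market_price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Recover the volatility implied by a hypothetical market price of 10.45
iv = implied_vol(10.45, S=100, K=100, T=1.0, r=0.05)  # ~0.20
```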
THE DIFFERENCE BETWEEN IMPLIED VOLATILITY AND HISTORICAL VOLATILITY
What is the main difference between implied volatility and historical volatility?
- Historical volatility only describes the past; it is usually just a standard deviation, which is symmetric (treating drops and gains alike) and therefore imprecise.
- Implied volatility is calculated from option contracts that expire in the future, so it carries a hidden message about what the market expects: how the price could change and how risky the asset will be, or rather, how risky it is right now.
I don’t have deep experience with other brokers, but Interactive Brokers shares an implied volatility chart with you, including some history. If you want historical implied volatility data for all stocks or other assets for more in-depth analysis, it can be costly, and most of the time only daily data is available. Volatility, like prices, is not stationary; it changes over time. In the next section of this article, I will show you some models that approximate volatility. One of them is quite accurate compared to implied volatility and could be used in your analyses if you don’t own historical implied volatility data.
In the picture above, we compare three approximations of volatility with the real implied volatility (green line) calculated by CBOE, which shares daily historical data for a few big US companies on its website. I briefly describe each approximation below for a better understanding of the plot. I also added a chart of GS (Goldman Sachs) for the chosen period.
The red line is the standard deviation, which deviates substantially from the other values. It often moves in the opposite direction, and at the end of 2016 it grew sharply because of fast growth in price. The market never considers rapid growth riskier than a sharp drop; this is the asymmetry.
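A rolling standard deviation like the red line can be sketched in a few lines of numpy (window length and synthetic inputs are hypothetical). Note that it treats positive and negative returns identically, which is exactly why it spikes on fast rallies as well as on crashes:

```python
import numpy as np

def rolling_hist_vol(log_returns, window=21, trading_days=252):
    """Annualized rolling historical volatility (plain standard deviation).
    Symmetric: large moves in either direction raise it equally."""
    windows = np.lib.stride_tricks.sliding_window_view(log_returns, window)
    return windows.std(axis=1, ddof=1) * np.sqrt(trading_days)

# Synthetic daily log-returns for illustration
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.012, 500)
vol = rolling_hist_vol(r)  # one value per 21-day window
```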
The blue line represents conditional volatility calculated from the family of ARCH models (Autoregressive Conditional Heteroskedasticity). These models measure how the variance of a time series changes over time, so they are well suited to our problem. The GJR-GARCH model is specialized for calculating and predicting volatility, and its results are quite similar to real implied volatility. We can see that from July 2016 the accuracy worsened, but this could be addressed by walk-forward refitting of the model, for example every three months (or more often if you use intraday data).
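To make the GJR-GARCH idea concrete, here is a minimal sketch of its conditional variance recursion with fixed, hypothetical parameters (in practice they are estimated by maximum likelihood, e.g. with the `arch` Python package via `arch_model(returns, p=1, o=1, q=1)`):

```python
import numpy as np

def gjr_garch_vol(r, omega, alpha, gamma, beta):
    """Conditional volatility from the GJR-GARCH(1,1) recursion:
        sigma2_t = omega + (alpha + gamma * I[r_{t-1} < 0]) * r_{t-1}**2
                   + beta * sigma2_{t-1}
    The gamma term makes negative returns raise volatility more than
    positive ones, capturing the asymmetry discussed above."""
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()  # initialize with the sample variance
    for t in range(1, len(r)):
        neg = 1.0 if r[t - 1] < 0 else 0.0
        sigma2[t] = omega + (alpha + gamma * neg) * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)

# Synthetic daily log-returns and hypothetical (not fitted) parameters
rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, 1000)
vol = gjr_garch_vol(r, omega=1e-6, alpha=0.05, gamma=0.08, beta=0.90)
```

With alpha + gamma/2 + beta < 1 the process is stationary; the chosen parameters imply an unconditional daily volatility of about 1%, matching the synthetic data.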
The last model I will present is a Bayesian approach, a very advanced methodology well beyond the level of this article. In this case, I calculated several posterior estimates of volatility, which form the yellow region. This approach is very natural and also gives excellent results, but the computation takes much longer than fitting ARCH models, and the stability is somewhat lower. If we take an upper percentile of the posterior over time, such as 80% (the upper part of the yellow region), we get a better approximation of implied volatility.
We have chosen the given period because, during that period, we experienced a few exciting worldwide events. These events affected the market and changed the volatility, respectively, the behavior of the market. On the plot, we have four main events:
- 2015-08-24 – a significant one-day drop in stock markets, probably caused by a mix of factors: falling oil prices and, two weeks earlier, China devaluing its currency and showing slowing economic growth.
- Jan 2016 – one of the worst Januaries on the US stock exchanges: a correction in the market after unprecedented growth since 2012 without significant corrections.
- 2016-06-23 – the UK voted for Brexit in a referendum, and the result was shocking mostly for European markets.
- Nov and Dec 2016 – the post-election atmosphere, together with Trump’s promises to lower corporate taxes and the Fed’s interest rate increase (a rate hike is usually taken as a sign that the economy is doing very well, but it depends on the actual situation).
During the given period, we experienced exciting events of the kind that occur quite often, unlike the 2020 pandemic, which is a different story: it transformed the market into a huge casino, where the fastest drop in modern history was followed by the quickest recovery in the history of US financial markets.
We discussed historical and implied volatility, explained the difference between them, and described important properties of prices and their distribution. On the practical side, modeling can approximate implied volatility when you don’t have historical data, and it can also inspire strategies: comparing the approximated volatility with the real one and generating trading signals from the difference.