Mean-Variance Portfolio Optimization
Portfolio optimization is one of the fundamental topics in asset management, as old as quantitative finance itself.
In these articles, I would like to present some portfolio optimization methodologies. We will go through some classical but still up-to-date methods and the newer approaches with machine learning, and finally we will construct our own.
We will not go too deep into the theory, but I will provide sufficient information for each methodology with relevant references. These methodologies are used daily by thousands of professionals in the financial industry.
Why portfolio optimization?
Why should you, as a retail trader, bother with portfolio optimization? Isn’t this topic just for the funds? The answer is yes and no. I believe you have heard about the long bias of the stock markets (see our article on Earnings & Long Bias).
It has been hard to beat the market over the last 10 years, when the average annual return of the S&P 500 index was 11.8% (2010-2019) and 17.4% for the NASDAQ 100. Even when you have an excellent trading strategy, it is good to have some long exposure to the market. Long-term investment in the stock market has never failed (in general, not in particular companies). Yes, the easiest way to invest is to buy a NASDAQ 100 or S&P 500 ETF and hold it. However, if you are an investor from Europe and cannot be recognized as a professional, you can forget about US ETFs thanks to EU legislation.
What will we cover in these articles?
1. Mean-variance optimization (based on the Markowitz model and its extensions, this article)
- Minimum Volatility
- Maximum Return
- Maximum Sharpe Ratio
- Max Return + Min Volatility
2. Newer approaches (the second article)
- Black-Litterman Allocation (Bayesian approach)
- VaR and CVaR optimization
- Stochastic programming (multistage portfolio optimization)
- Machine learning (mostly the extensive research of Marcos Lopez de Prado)
- Hierarchical Risk Parity
- Nested Clustered Optimization
3. In the last article (3/3), I share the full Python code of the walk-forward optimization and the whole thinking process behind the approach.
The reality behind portfolio optimization
Portfolio optimization answers the question of how much capital to invest in each asset chosen from some universe. The result of the process is a weight for each stock or other asset.
Note that you can also use portfolio optimization across strategies, deciding what portion of capital to allocate to each strategy. Optimization works with expectations, so the resulting portfolios are theoretically the ones with the highest expected return, or the lowest expected risk, the highest Sharpe ratio, and so on.
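To make the output of such a process concrete, here is a toy sketch (all numbers are hypothetical) of turning optimizer weights into portfolio returns and risk:

```python
import numpy as np

# Hypothetical daily returns for 3 assets over 5 days (rows = days)
returns = np.array([
    [0.010, -0.005, 0.002],
    [0.003,  0.007, -0.001],
    [-0.004, 0.002, 0.005],
    [0.006, -0.001, 0.000],
    [0.001,  0.004, 0.003],
])
weights = np.array([0.5, 0.3, 0.2])  # illustrative output of an optimizer

portfolio_returns = returns @ weights       # daily portfolio returns
expected_return = portfolio_returns.mean()  # in-sample mean return
volatility = portfolio_returns.std(ddof=1)  # in-sample risk estimate
```

The weights are the only thing the optimizer produces; everything else is arithmetic on the return series.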
Unfortunately, the reality is not that simple. Portfolio optimization methods work with historical data only and make expectations based on history. Never forget that past performance is not a guarantee of future performance.
The biggest problem from a data scientist’s view: portfolio optimization is just a fit to training data. There is no out-of-sample check inside the optimization, and no overfitting prevention such as a validation set to stop the fitting. But does that mean it isn’t useful?
Let’s have a look at the practical results of older and newer methodologies and draw our own conclusions.
import numpy as np
import pandas as pd
import pypfopt
Old but still gold - Markowitz
Mean-variance is the basic methodology of modern portfolio theory, developed by Harry Markowitz in 1952. As the name suggests, we use the mean of the returns (the expected return) and the variance, or rather the covariance, between the returns of multiple stocks. The whole theory around portfolio optimization was developed because investors want high returns with low risk. There is plenty of material on this topic; if you are interested in more mathematical theory, look at a paper from Washington University. The Markowitz asset allocation model takes this form:

$$\begin{aligned}
\min_{w}\quad & w^{\top}\Sigma w \\
\text{s.t.}\quad & \mathbf{1}^{\top}w = 1,\\
& \mu^{\top}w \ge \mu_{\min},\\
& -1 \le w_i \le 1,
\end{aligned}$$

where $w$ is the vector of weights for each asset, $\Sigma$ is the covariance matrix, $\mu$ is the vector of expected returns for all assets, and $\mu_{\min}$ is the minimum expected return of the portfolio we want to achieve. The value of the objective function is a real number representing the variance of the portfolio.
The objective function is a quadratic programming problem that is solved easily. The first constraint requires the weights to sum to 1 (no leverage); the second enforces the minimum expected return. The last constraint keeps the weights in the closed interval [-1, 1], or [0, 1] if we do not want to short.
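As a minimal sketch of this quadratic program, here it is solved with scipy’s general-purpose SLSQP solver (the toy values of mu, Sigma, and mu_min are made up; dedicated QP solvers, as used by PyPortfolioOpt, are faster and more robust):

```python
import numpy as np
from scipy.optimize import minimize

# Toy inputs: expected returns and covariance for 3 assets (hypothetical)
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([
    [0.040, 0.006, 0.010],
    [0.006, 0.090, 0.012],
    [0.010, 0.012, 0.060],
])
mu_min = 0.09  # minimum expected portfolio return we require

def variance(w):
    return w @ Sigma @ w  # objective: portfolio variance

constraints = [
    {"type": "eq",   "fun": lambda w: w.sum() - 1.0},    # fully invested
    {"type": "ineq", "fun": lambda w: w @ mu - mu_min},  # return floor
]
bounds = [(0.0, 1.0)] * 3  # long-only; use (-1.0, 1.0) to allow shorting

res = minimize(variance, x0=np.full(3, 1 / 3), bounds=bounds,
               constraints=constraints, method="SLSQP")
w_opt = res.x  # minimum-variance weights meeting the return floor
```

Since the equal-weight portfolio already satisfies the return floor here, the optimizer can only lower the variance from that starting point.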
This simple model minimizes portfolio variance subject to a condition on the expected return. There are a few modifications if we want to maximize the return instead, or do both. With PyPortfolioOpt, it is straightforward: minimizing the risk while requiring an expected return of at least mu_x:
opt = pypfopt.EfficientFrontier(expected_returns, cov_matrix, solver='CVXOPT').efficient_return(mu_x)
Conversely, maximizing the return when we don’t want portfolio volatility higher than sigma_x:
opt = pypfopt.EfficientFrontier(expected_returns, cov_matrix, solver='CVXOPT').efficient_risk(sigma_x)
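Both calls assume you have already estimated expected_returns and cov_matrix. A minimal sketch of one common way to build them from a price history (the synthetic prices and the 252-trading-day annualization are assumptions; PyPortfolioOpt also provides its own estimators such as mean_historical_return and sample_cov):

```python
import numpy as np
import pandas as pd

# Hypothetical daily close prices for two tickers (synthetic random walk)
rng = np.random.default_rng(0)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, size=(500, 2)), axis=0)),
    columns=["AAA", "BBB"],
)

daily = prices.pct_change().dropna()     # daily simple returns
expected_returns = daily.mean() * 252    # annualized mean return per asset
cov_matrix = daily.cov() * 252           # annualized covariance matrix
```

These two objects are then passed straight into EfficientFrontier.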
Maximizing the Sharpe ratio
Another important methodology of portfolio optimization uses the Sharpe ratio. For more mathematical details, see a paper from Columbia University. The Sharpe ratio is the portfolio’s excess return divided by the portfolio volatility, or standard deviation. Excess return is the return net of the risk-free interest rate (often, we use a zero value for the risk-free rate).

$$\max_{w}\quad \frac{\mu^{\top}w - r_f}{\sqrt{w^{\top}\Sigma w}} \quad \text{s.t.}\quad \mathbf{1}^{\top}w = 1,\; -1 \le w_i \le 1,$$

with the same symbols as before; $r_f$ is the risk-free interest rate (usually the interest rate on a three-month U.S. Treasury bill). The value of the objective function is a real number, the Sharpe ratio of the portfolio. Note that this optimization problem can be reduced to a convex quadratic programming problem. Again, a simple application in Python (with a zero risk-free interest rate):
opt = pypfopt.EfficientFrontier(expected_returns, cov_matrix, solver='CVXOPT').max_sharpe(risk_free_rate=0)
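For intuition, the same problem can also be attacked by directly minimizing the negative Sharpe ratio with a general-purpose solver. A sketch with made-up inputs (this is not how PyPortfolioOpt solves it internally; it uses the convex reformulation mentioned above):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical expected returns and covariance for 3 assets
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.040, 0.006, 0.010],
                  [0.006, 0.090, 0.012],
                  [0.010, 0.012, 0.060]])
rf = 0.0  # zero risk-free rate, as in the text

def neg_sharpe(w):
    excess = w @ mu - rf
    vol = np.sqrt(w @ Sigma @ w)
    return -excess / vol  # minimize the negative => maximize the Sharpe ratio

res = minimize(neg_sharpe, x0=np.full(3, 1 / 3),
               bounds=[(0.0, 1.0)] * 3,  # long-only
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
               method="SLSQP")
w_sharpe = res.x  # maximum-Sharpe weights
```

The resulting Sharpe ratio can be no worse than the equal-weight starting point, since the optimizer only moves when it improves the objective.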
Practical Example - Settings
Our stock universe will consist of the 200 most-traded stocks in the in-sample period, ranked by median dollar volume. We will divide the data from Jan 2015 until Oct 2020 into in-sample and out-of-sample periods using a walk-forward approach.
The out-of-sample window’s length will be one quarter, so we will rebalance our portfolio 4 times a year, while the in-sample window will be one year.
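The walk-forward scheme above (one year in-sample, one quarter out-of-sample, rolled forward each quarter) can be sketched as a small date-splitting helper; the exact boundary conventions here are an assumption:

```python
import pandas as pd

def walk_forward_splits(start, end, in_sample_years=1, oos_months=3):
    """Yield (in_sample_start, rebalance_date, oos_end) triples.

    The in-sample window ends at the rebalance date; the out-of-sample
    window runs from the rebalance date forward by `oos_months`.
    """
    rebalance = pd.Timestamp(start) + pd.DateOffset(years=in_sample_years)
    end = pd.Timestamp(end)
    while rebalance < end:
        oos_end = min(rebalance + pd.DateOffset(months=oos_months), end)
        yield (rebalance - pd.DateOffset(years=in_sample_years),
               rebalance, oos_end)
        rebalance = oos_end  # roll forward by one out-of-sample window

splits = list(walk_forward_splits("2015-01-01", "2020-10-01"))
```

With data from Jan 2015, the first rebalance falls at the start of 2016 and the splits then step forward quarter by quarter until Oct 2020.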
We can also make short trades, but because of the long bias in the stock market and holding period of 3 months, it is better not to use them (remember that you have to pay a borrow fee for each overnight holding of a short position).
We will compare:
- Minimizing variance (minimum expected return is set as the maximum of the average 3-month return on in-sample of SPY and QQQ)
- Maximizing return (maximum volatility is set as the maximum of standard deviations of SPY and QQQ on in-sample)
- Maximizing Sharpe Ratio (zero risk-free rate)
- Invest uniformly into top 100 stocks according to the median dollar volume
- Invest into top 15 stocks on in-sample according to Sharpe Ratio
- Benchmarks: SPY and QQQ (ETFs for S&P 500 index and Nasdaq 100 index)
We will use a percentage commission and slippage each time we buy or sell. Based on our live trading results on less liquid stocks, with thousands of executed trades, we typically see:
- Commission of around 0.1% per trade (open order + close order), based on Interactive Brokers pricing: 0.005 USD per share, min. 1 USD per order.
- Slippage on less liquid stocks was around 0.16% per trade, but we will invest only in high-volume stocks, and we can also use Market-On-Close (MOC) orders, where the broker executes at the closing price (usually this order has to be placed at least 10 minutes before market close on NASDAQ, or 15 minutes on NYSE).
Given this, we will use zero slippage for long-term investing.
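These assumptions can be folded into one small helper; the function name and the turnover-based approximation are illustrative, not part of any library:

```python
def trading_costs(turnover_usd, commission_rate=0.001, slippage_rate=0.0):
    """Estimated cost of a rebalance.

    turnover_usd: total USD value traded (buys + sells) in the rebalance.
    commission_rate: 0.1% per round trip, per the text's IB-based estimate.
    slippage_rate: zero for high-volume stocks with MOC orders, per the text.
    """
    return turnover_usd * (commission_rate + slippage_rate)

cost = trading_costs(100_000)  # cost of turning over 100k USD of positions
```

At each quarterly rebalance, this cost is subtracted from the portfolio value.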
We will create theoretical portfolios where we invest according to the weights. Thanks to fractional investing, it is possible to construct these portfolios now.
Fractional investing means we can buy half of a share or ⅓ of a share. The models will give us different weights, but we will not invest in stocks whose weight is less than 1% of the portfolio. This is for practical purposes: when you invest 100,000 USD, a position under 1,000 USD will not affect the portfolio, and many small positions just increase our costs.
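The 1% cutoff described above can be sketched as a small post-processing step on the optimizer’s weights (the function name is illustrative):

```python
import numpy as np

def prune_small_weights(weights, threshold=0.01):
    """Zero out positions below `threshold` of the portfolio, renormalize."""
    w = np.where(np.abs(weights) < threshold, 0.0, np.asarray(weights, float))
    return w / w.sum()  # remaining weights are scaled back up to sum to 1

w = prune_small_weights(np.array([0.50, 0.30, 0.192, 0.008]))
```

The 0.8% position is dropped and its capital is spread across the surviving positions in proportion to their weights.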
Walk-forward out-of-sample results
We use the daily returns of the portfolios as our result set. The out-of-sample period runs from Jan 2015 until Oct 2020. Costs are included (we rebalance every 3 months). Let’s have a look at the table first:
As stated at the beginning, it is tough to beat the index. We beat SPY with 3 models in terms of annual return, which is an outstanding result. But the maximum drawdown is almost the same and was caused by the same event. Volatility is better only in the volatility-minimizing model, so we can see that these models behave as intended out-of-sample.
Beating Nasdaq’s returns (QQQ) is almost impossible. Only the return-maximizing model has beaten it, and not according to the Sharpe ratio or the drawdown (if you look at the plot, the average return is higher only because of the extreme year 2020).
One reason it is hard to beat the index is that the index is weighted by market capitalization. As we know, big technology companies are experiencing massive growth. The more market capitalization those companies have, the higher their weights, so they affect the index more.
The covid-19 pandemic caused the market drop. When an event like that occurs and you are holding long-term positions, the best option is to sell and wait. I personally sold the portfolio I had held for 3 years at the end of February, when the market had dropped around 5%. By April 6, I was already back in the game with new long positions.
Plotting the results
You can see in the plot how hard it is to beat the blue (SPY) and orange (QQQ) lines. Interestingly, the models recovered from the drop at the end of 2018 much more slowly than the market, but the recovery after the 2020 market crash was similarly fast.
Our best candidates, opt_maxSharpe (green) and opt_maxReturn (red), do not look that promising when we see that they did not get out of the drawdown in 2019, while the market made new highs.
The problem may be that the market drop ended precisely when the new year began. During the portfolio rebalances near the bottom, the optimization algorithms had chosen the most defensive stocks, and they did not catch the subsequent rally.
After seeing these results, you may say that all these beautiful theories are for nothing. Yet investment funds and hedge funds use these methodologies both for stock investing and for allocating capital across different algo-trading strategies.
With portfolio optimization, you can create a system of strategies that works very well. Many investors have special requirements for lower volatility, even at the cost of lower returns, and the funds try to fulfill them.
Having 9% annually with 16.6% volatility (opt_minVolatility) is an excellent result. But getting 11% with 18.5% volatility and the same drawdown just by buying an ETF, without knowing anything about optimization, is even better.
You might also look at opt_maxReturn, which made 21% annually, and say: ‘wow, it beats the S&P 500 significantly.’ But it beat the index significantly in only 2 years out of 6, and it spent the whole of 2019 in drawdown while the S&P 500 made new highs. Are you sure it is better?
Because investors have different requirements, we cannot blindly say one approach is better or worse based on a single comparison to the indexes over a 5-year period. To draw such conclusions, it is necessary to do a longer historical check and to vary the model inputs – above all, expected returns calculated with several methods, or returns randomized within a given distribution – to see how robust the results of these methodologies are.
This instability problem of the mean-variance approach was a motivation for newer methods, which are presented in the second article.