Portfolio Optimization

Written by Philipp Rudiger
Created: November 12, 2019
Last updated: August 4, 2021

Portfolio optimization is used by risk-averse investors to construct portfolios that maximize expected return for a given level of market risk, emphasizing that risk is an inherent part of higher reward.

This notebook:

  1. Runs an example Monte Carlo simulation to construct an optimal portfolio and its resulting returns
  2. Creates an Efficient Frontier, which is used to identify a set of optimal portfolios offering the highest expected return for a defined level of risk, or the lowest risk for a given level of expected return

Monte Carlo simulations are used by analysts to determine the expected value and optimal distribution of a portfolio.
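
Concretely, given portfolio weights $w$, the mean vector $\mu$ of daily log returns, and their covariance matrix $\Sigma$, the quantities estimated throughout this notebook are:

$$E[R_p] = 252\, w^\top \mu, \qquad \sigma_p = \sqrt{252\, w^\top \Sigma\, w}, \qquad SR = E[R_p] / \sigma_p$$

where 252 is the conventional number of trading days in a year; these are exactly the quantities the code below computes from the weights.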

In [1]:
import numpy as np
import pandas as pd
import hvplot.pandas  # noqa
In [ ]:
stocks = pd.read_csv('./stocks.csv', index_col='Date', parse_dates=True)
In [ ]:
stocks.head()
In [ ]:
mean_daily_ret = stocks.pct_change(1).mean()
mean_daily_ret
In [ ]:
stocks.pct_change(1).corr()

Simulating Thousands of Possible Allocations

In [ ]:
stocks.head()
In [ ]:
stock_normed = stocks/stocks.iloc[0]
stock_normed.hvplot()
In [ ]:
stock_daily_ret = stocks.pct_change(1)
stock_daily_ret.head()

Log Returns vs Arithmetic Returns

We will now switch over to using log returns instead of arithmetic returns. For many of our use cases they are almost the same, but most technical analyses require detrending/normalizing the time series, and using log returns is a nice way to do that. Log returns are also convenient to work with in many of the algorithms we will encounter.
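
For reference, the daily log return is $r_t = \ln(P_t / P_{t-1}) = \ln P_t - \ln P_{t-1}$. For small price changes this is very close to the arithmetic return $P_t / P_{t-1} - 1$, and log returns add across periods, which is why annualizing below is as simple as multiplying the daily mean by 252.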

For a full analysis of why we use log returns, check this great article.

In [ ]:
log_ret = np.log(stocks/stocks.shift(1))
log_ret.head()
In [ ]:
log_ret.hvplot.hist(bins=100, subplots=True, width=400, group_label='Ticker', grid=True).cols(2)
In [ ]:
log_ret.describe().transpose()
In [ ]:
log_ret.mean() * 252  # annualized mean daily log return (252 trading days)
In [ ]:
# Compute pairwise covariance of columns
log_ret.cov()
In [ ]:
log_ret.cov() * 252  # annualize: multiply by ~252 trading days

Single Run for Some Random Allocation

In [ ]:
# Set seed (optional)
np.random.seed(101)

# Stock Columns
print('Stocks')
print(stocks.columns)
print('\n')

# Create Random Weights
print('Creating Random Weights')
weights = np.array(np.random.random(4))
print(weights)
print('\n')

# Rebalance Weights
print('Rebalance to sum to 1.0')
weights = weights / np.sum(weights)
print(weights)
print('\n')

# Expected Return
print('Expected Portfolio Return')
exp_ret = np.sum(log_ret.mean() * weights) *252
print(exp_ret)
print('\n')

# Expected Variance
print('Expected Volatility')
exp_vol = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))
print(exp_vol)
print('\n')

# Sharpe Ratio
SR = exp_ret/exp_vol
print('Sharpe Ratio')
print(SR)
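
Note that the Sharpe ratio is conventionally defined as $SR = (E[R_p] - R_f) / \sigma_p$, where $R_f$ is the risk-free rate; the calculation above implicitly assumes $R_f = 0$.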

Great! Now we can just run this many times over!

In [ ]:
num_ports = 15000

all_weights = np.zeros((num_ports,len(stocks.columns)))
ret_arr = np.zeros(num_ports)
vol_arr = np.zeros(num_ports)
sharpe_arr = np.zeros(num_ports)

for ind in range(num_ports):

    # Create Random Weights
    weights = np.array(np.random.random(4))

    # Rebalance Weights
    weights = weights / np.sum(weights)
    
    # Save Weights
    all_weights[ind,:] = weights

    # Expected Return
    ret_arr[ind] = np.sum((log_ret.mean() * weights) *252)

    # Expected Variance
    vol_arr[ind] = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))

    # Sharpe Ratio
    sharpe_arr[ind] = ret_arr[ind]/vol_arr[ind]
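
As an aside, the same simulation could be vectorized with NumPy instead of looping in Python. A rough sketch, reusing the num_ports, stocks and log_ret variables defined above (and not part of the original notebook run), might look like this:

rand = np.random.random((num_ports, len(stocks.columns)))
w = rand / rand.sum(axis=1, keepdims=True)           # normalize each row of weights to sum to 1
rets = w @ (log_ret.mean().values * 252)             # annualized expected return per portfolio
vols = np.sqrt(np.einsum('ij,jk,ik->i', w, log_ret.cov().values * 252, w))  # annualized volatility
sharpes = rets / vols                                # Sharpe ratio per portfolio

The einsum call evaluates the quadratic form $w_i^\top (252\,\Sigma)\, w_i$ for every row of weights at once, which is the same volatility calculation performed inside the loop.
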
In [ ]:
sharpe_arr.max()
In [ ]:
sharpe_arr.argmax()
In [ ]:
# 1419 is the sharpe_arr.argmax() found above (this index depends on the random seed)
all_weights[1419,:]
In [ ]:
max_sr_ret = ret_arr[1419]
max_sr_vol = vol_arr[1419]

Plotting the data

In [ ]:
import holoviews as hv
In [ ]:
scatter = hv.Scatter((vol_arr, ret_arr, sharpe_arr), 'Volatility', ['Return', 'Sharpe Ratio'])
max_sharpe = hv.Scatter([(max_sr_vol,max_sr_ret)])

scatter.opts(color='Sharpe Ratio', cmap='plasma', width=600, height=400, colorbar=True, padding=0.1) *\
max_sharpe.opts(color='red', line_color='black', size=10)

Mathematical Optimization

There are much better ways to find good allocation weights than just guessing and checking! We can use optimization functions to find the ideal weights mathematically.
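
Formally, the problem solved below is: maximize $SR(w) = \dfrac{252\, w^\top \mu}{\sqrt{252\, w^\top \Sigma\, w}}$ subject to $\sum_i w_i = 1$ and $0 \le w_i \le 1$, which is what the bounds and equality constraint passed to the optimizer further down encode.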

Functionalize Return and SR operations

In [ ]:
def get_ret_vol_sr(weights):
    """
    Takes in weights, returns an array of return, volatility, and Sharpe ratio
    """
    weights = np.array(weights)
    ret = np.sum(log_ret.mean() * weights) * 252
    vol = np.sqrt(np.dot(weights.T, np.dot(log_ret.cov() * 252, weights)))
    sr = ret/vol
    return np.array([ret,vol,sr])
In [ ]:
from scipy.optimize import minimize

To fully understand all the parameters, check out the scipy.optimize.minimize documentation.

In [ ]:
#help(minimize)

Optimization works as a minimization function. Since we actually want to maximize the Sharpe ratio, we will need to turn it negative so we can minimize the negative Sharpe (the same as maximizing the positive Sharpe).

In [ ]:
def neg_sharpe(weights):
    return  get_ret_vol_sr(weights)[2] * -1
In [ ]:
# Constraints
def check_sum(weights):
    '''
    Returns 0 if sum of weights is 1.0
    '''
    return np.sum(weights) - 1
In [ ]:
# By convention of the minimize function, an equality constraint is a function that returns zero when the condition is satisfied
cons = ({'type':'eq','fun': check_sum})
In [ ]:
# 0-1 bounds for each weight
bounds = ((0, 1), (0, 1), (0, 1), (0, 1))
In [ ]:
# Initial Guess (equal distribution)
init_guess = [0.25,0.25,0.25,0.25]
In [ ]:
# Sequential Least Squares Programming (SLSQP).
opt_results = minimize(neg_sharpe,init_guess,method='SLSQP',bounds=bounds,constraints=cons)
In [ ]:
opt_results
In [ ]:
opt_results.x
In [ ]:
get_ret_vol_sr(opt_results.x)

All Optimal Portfolios (Efficient Frontier)

The efficient frontier is the set of optimal portfolios that offers the highest expected return for a defined level of risk or the lowest risk for a given level of expected return. Portfolios that lie below the efficient frontier are sub-optimal, because they do not provide enough return for the level of risk. Portfolios that cluster to the right of the efficient frontier are also sub-optimal, because they have a higher level of risk for the defined rate of return.
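
In terms of the quantities above, each point on the frontier solves: minimize $\sigma_p(w) = \sqrt{252\, w^\top \Sigma\, w}$ subject to $\sum_i w_i = 1$, $252\, w^\top \mu = R^{*}$ and $0 \le w_i \le 1$, for a grid of target returns $R^{*}$. That is exactly what the loop below does for 100 target returns between 0 and 0.3.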

In [ ]:
# Our returns go from 0 to somewhere around 0.3
# Create a linspace of target returns to sweep the frontier over
frontier_y = np.linspace(0, 0.3, 100) # Change 100 to a lower number for slower computers!
In [ ]:
def minimize_volatility(weights):
    return  get_ret_vol_sr(weights)[1] 
In [ ]:
frontier_volatility = []

for possible_return in frontier_y:
    # constrain the portfolio return to equal this target return
    cons = ({'type':'eq','fun': check_sum},
            {'type':'eq','fun': lambda w: get_ret_vol_sr(w)[0] - possible_return})
    
    result = minimize(minimize_volatility,init_guess,method='SLSQP',bounds=bounds,constraints=cons)
    
    frontier_volatility.append(result['fun'])
In [ ]:
scatter * hv.Curve((frontier_volatility, frontier_y)).opts(color='green', line_dash='dashed')