
4. Expected utility and Monte Carlo in Python

Camilo A. Garcia Trillos - 2020


In this notebook,

  • we look at the Monte Carlo method and how to use it to approximate expected utilities or certainty equivalents.

  • we use Python to plot information with matplotlib, including a histogram and a regression.


Let us import some packages: math, numpy, matplotlib and scipy.

import math
import numpy as np
import scipy as sp
from numpy.random import default_rng  # pseudo-random number generator
import matplotlib.pyplot as plt

# This is an indicator to tell jupyter notebook to show us all plots inline:
%matplotlib inline

Expected utility via Monte Carlo

To compute the expected utility of a wealth gamble $W$ we can use the law of large numbers. Indeed, if $\mathbb E[|u(W)|]<\infty$, we have
$$\frac{1}{N} \sum_{i=1}^N u(W_i) \rightarrow \mathbb E[u(W)] \text{ as } N\rightarrow \infty,$$
where $(W_i)$ is a family of independent draws of random variables with $W_i \sim W$ for each $i$.

The Monte Carlo method relies on this equality to produce an approximation to the expectation (by choosing a large $N$ and calculating the empirical average).
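As a minimal illustration of the idea (a toy example that is not in the original notebook), we can approximate $\mathbb E[N^2] = 1$ for a standard normal $N$:

# Minimal Monte Carlo sketch: approximate E[N^2] = 1 for a standard normal N
rng = default_rng(0)                 # seed 0 chosen arbitrarily, for reproducibility
draws = rng.standard_normal(100000)  # 100000 independent standard normal draws
print((draws**2).mean())             # empirical average; should be close to 1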

To see how this works, let us start with $W$ being normally distributed, that is $W = \sigma N + \mu$, where $\mu, \sigma \in \mathbb R$ and $N$ is standard normally distributed.

Now, let us suppose first that we want to compute the expected utility of a CARA utility $u(x) = 1-\exp(-\alpha x)$. We can calculate explicitly
$$\mathbb E[u(W)] = \mathbb E[1- \exp(-\alpha \sigma N - \alpha \mu )] = 1- \exp\left(-\alpha \mu + \frac 1 2 \alpha^2 \sigma^2 \right).$$
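As a quick sanity check of this formula (a sketch that is not in the original notebook; it uses scipy.integrate.quad and arbitrarily chosen parameter values):

# Compare the closed form against numerical integration over the standard normal density
from scipy.integrate import quad
alpha, mu_, sd_ = 1.0, 5.0, 2.0   # arbitrary test values
integrand = lambda z: (1 - np.exp(-alpha*(sd_*z + mu_))) * np.exp(-z**2/2) / np.sqrt(2*np.pi)
numeric, _ = quad(integrand, -12, 12)
closed = 1 - np.exp(-alpha*mu_ + 0.5*alpha**2*sd_**2)
print(numeric, closed)            # the two values agree to high precision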

We will use this value to check the approximation produced by the Monte Carlo method as explained before. Let us build a plot of this function on a given domain.

Plotting the exact solution

There are several libraries allowing us to plot in Python. We will use one of the simplest: Matplotlib.

A simple way to plot in this library is to provide it with vectors of input and output. To try it, let us simply plot the result of the (exact) expected utility when the CARA coefficient changes.

We start by sampling the space of coefficients of risk aversion:

x = np.linspace(0.001,3,100)  # creates a vector of size 100 with numbers between 0.001 and 3
print(x)
[1.00000000e-03 3.12929293e-02 6.15858586e-02 9.18787879e-02 1.22171717e-01 1.52464646e-01 1.82757576e-01 2.13050505e-01 2.43343434e-01 2.73636364e-01 3.03929293e-01 3.34222222e-01 3.64515152e-01 3.94808081e-01 4.25101010e-01 4.55393939e-01 4.85686869e-01 5.15979798e-01 5.46272727e-01 5.76565657e-01 6.06858586e-01 6.37151515e-01 6.67444444e-01 6.97737374e-01 7.28030303e-01 7.58323232e-01 7.88616162e-01 8.18909091e-01 8.49202020e-01 8.79494949e-01 9.09787879e-01 9.40080808e-01 9.70373737e-01 1.00066667e+00 1.03095960e+00 1.06125253e+00 1.09154545e+00 1.12183838e+00 1.15213131e+00 1.18242424e+00 1.21271717e+00 1.24301010e+00 1.27330303e+00 1.30359596e+00 1.33388889e+00 1.36418182e+00 1.39447475e+00 1.42476768e+00 1.45506061e+00 1.48535354e+00 1.51564646e+00 1.54593939e+00 1.57623232e+00 1.60652525e+00 1.63681818e+00 1.66711111e+00 1.69740404e+00 1.72769697e+00 1.75798990e+00 1.78828283e+00 1.81857576e+00 1.84886869e+00 1.87916162e+00 1.90945455e+00 1.93974747e+00 1.97004040e+00 2.00033333e+00 2.03062626e+00 2.06091919e+00 2.09121212e+00 2.12150505e+00 2.15179798e+00 2.18209091e+00 2.21238384e+00 2.24267677e+00 2.27296970e+00 2.30326263e+00 2.33355556e+00 2.36384848e+00 2.39414141e+00 2.42443434e+00 2.45472727e+00 2.48502020e+00 2.51531313e+00 2.54560606e+00 2.57589899e+00 2.60619192e+00 2.63648485e+00 2.66677778e+00 2.69707071e+00 2.72736364e+00 2.75765657e+00 2.78794949e+00 2.81824242e+00 2.84853535e+00 2.87882828e+00 2.90912121e+00 2.93941414e+00 2.96970707e+00 3.00000000e+00]

We now implement the exact expected CARA utility under the normality assumption. Since it is a simple expression, we can use a lambda function, as introduced before.

# The operations in expected_u are well defined for vectors as long as mu,sd,x broadcast correctly together.
expected_u = lambda mu,sigma,alpha: 1-np.exp(-alpha*mu+0.5*alpha**2*sigma**2)

Note that we use 'np.exp' and not 'math.exp': this is because we want the function to be 'vectorial', that is, to accept vectors as an input.

(try changing np.exp for math.exp, run the code and then run the code below... there will be an error).
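To see the failure concretely, here is a quick demonstration (not in the original notebook) with a scalar-only version of the function:

# A scalar-only version of expected_u, using math.exp instead of np.exp
bad_expected_u = lambda mu,sigma,alpha: 1-math.exp(-alpha*mu+0.5*alpha**2*sigma**2)
try:
    bad_expected_u(5, 2, x)   # x is a numpy array, so math.exp receives an array...
except TypeError as e:
    print("TypeError:", e)    # ...and raises a TypeError (the exact message depends on the numpy version)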

sd, mu = 2,5  # Equivalently sd=2 and mu=5
y = expected_u(mu,sd,x)  # Note that x is a vector
print(y)  # And so is y
[ 4.98553078e-03 1.43161779e-01 2.59436323e-01 3.57578415e-01 4.40665016e-01 5.11214871e-01 5.71295417e-01 6.22608244e-01 6.66557592e-01 7.04305396e-01 7.36815622e-01 7.64890079e-01 7.89197408e-01 8.10296616e-01 8.28656217e-01 8.44669848e-01 8.58669036e-01 8.70933653e-01 8.81700514e-01 8.91170449e-01 8.99514138e-01 9.06876943e-01 9.13382902e-01 9.19138057e-01 9.24233215e-01 9.28746256e-01 9.32744058e-01 9.36284107e-01 9.39415849e-01 9.42181820e-01 9.44618597e-01 9.46757599e-01 9.48625755e-01 9.50246068e-01 9.51638082e-01 9.52818282e-01 9.53800408e-01 9.54595733e-01 9.55213272e-01 9.55659954e-01 9.55940751e-01 9.56058773e-01 9.56015322e-01 9.55809920e-01 9.55440295e-01 9.54902345e-01 9.54190056e-01 9.53295395e-01 9.52208157e-01 9.50915768e-01 9.49403046e-01 9.47651906e-01 9.45640992e-01 9.43345252e-01 9.40735416e-01 9.37777378e-01 9.34431459e-01 9.30651531e-01 9.26383973e-01 9.21566425e-01 9.16126303e-01 9.09979026e-01 9.03025898e-01 8.95151561e-01 8.86220947e-01 8.76075607e-01 8.64529284e-01 8.51362567e-01 8.36316424e-01 8.19084342e-01 7.99302784e-01 7.76539543e-01 7.50279522e-01 7.19907310e-01 6.84685798e-01 6.43729854e-01 5.95973852e-01 5.40131507e-01 4.74646078e-01 3.97628461e-01 3.06780053e-01 1.99296367e-01 7.17463012e-02 -8.00794903e-02 -2.61359526e-01 -4.78482558e-01 -7.39352719e-01 -1.05377686e+00 -1.43395752e+00 -1.89512203e+00 -2.45632757e+00 -3.14149420e+00 -3.98073414e+00 -5.01206692e+00 -6.28363866e+00 -7.85660183e+00 -9.80886243e+00 -1.22399700e+01 -1.52775168e+01 -1.90855369e+01]

If for some reason you cannot directly implement a vectorial function, you can use a loop or the function np.vectorize to make the function vector-ready.
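For instance, a small sketch (not in the original notebook) of the np.vectorize approach:

# np.vectorize wraps a scalar-only function so that it accepts arrays
scalar_u = lambda mu,sigma,alpha: 1-math.exp(-alpha*mu+0.5*alpha**2*sigma**2)
vector_u = np.vectorize(scalar_u)
print(vector_u(5, 2, x))  # works on the vector x, at the cost of a Python-level loop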

We are ready to make the plot:

plt.plot(x,y,'b--')  # Make a plot between x and y
plt.title('Expected utility as a function of coefficient of absolute risk aversion - Gaussian case')  # Add a title
plt.xlabel('Coefficient of risk aversion')  # Add a label on the x axis
plt.ylabel('Expected utility')  # Add a label on the y axis
Text(0, 0.5, 'Expected utility')
[Figure: expected utility as a function of the coefficient of absolute risk aversion, Gaussian case]

MC implementation

Let us now look at the Monte Carlo approximation of the above function. We start by defining a function that calculates the CARA utility:

cara_utility = lambda x,alpha: 1-np.exp(-alpha*x)
# Some tests on our function
assert cara_utility(1,1) == 1-1./math.e, "Failed test with x=1, alpha=1"
assert cara_utility(5,2) == 1-math.e**-10., "Failed test with x=5, alpha=2"

We can now generate a sample of wealths, distributed like a $\mathcal N(\mu,\sigma^2)$.

sd, mu = 2,5  # Equivalently sd=2 and mu=5
N = 10000
rng = default_rng()
sample_gaussian = rng.normal(mu,sd,N)

How can we check that these are Gaussian? We can plot the histogram of the empirical distribution defined. The package matplotlib has a convenient function for this: plt.hist (recall that plt is our alias for pyplot)

plt.hist(sample_gaussian, density=True)  # Plots the histogram, normalising to obtain a pdf.
plt.title('Histogram of our sample')  # Add a title to the plot
plt.xlabel('sample_gaussian')  # Add a label on the X axis
plt.ylabel('density')  # Add a label on the Y axis
Text(0, 0.5, 'density')
[Figure: histogram of the Gaussian sample]

It looks like a good Gaussian sample with our parameters (centred at 5 and with standard deviation 2). In later notebooks we will learn some alternative ways of checking Gaussianity.
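One simple visual check (a sketch that is not in the original notebook) is to overlay the exact density on the histogram using scipy.stats:

from scipy.stats import norm
grid = np.linspace(mu-4*sd, mu+4*sd, 200)
plt.hist(sample_gaussian, density=True)           # empirical density
plt.plot(grid, norm.pdf(grid, loc=mu, scale=sd))  # exact N(mu, sd^2) density
plt.show()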

We can now calculate a Monte Carlo approximation of our expected utility. Examine the code below:

# In one line, we evaluate the cara utility for each entry of the sample, and then calculate the mean of the resulting vector
cara_utility(sample_gaussian,1).mean()
0.9505191425518782

Observe now the following: the estimation is random. To see this, let us run the estimation with another sample.

cara_utility(rng.normal(mu,sd,N),1).mean()
0.9512317213956805

As expected, the two values are close but different. Indeed, this estimator is random, because it depends on the sample, which is itself random. This is something to be taken into account when using Monte Carlo estimators.
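To quantify this randomness, here is a small experiment (not in the original notebook) that looks at the spread of the estimator across repeated samples:

# Standard deviation of the MC estimator across 100 independent repetitions
estimates = np.array([cara_utility(rng.normal(mu,sd,N),1).mean() for _ in range(100)])
print(estimates.std())  # roughly sd(u(W_1))/sqrt(N); compare with the error bound below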

In fairness, the Python implementation of the MC estimator can only produce pseudo-random numbers. To see this, we can fix the seed of the pseudo-random number generator and compare the answers.

rng = default_rng(1234)
sample_gaussian = rng.normal(mu,sd,N)
mc_eu1 = cara_utility(sample_gaussian,1).mean()

rng = default_rng(1234)
sample_gaussian2 = rng.normal(mu,sd,N)
mc_eu2 = cara_utility(sample_gaussian2,1).mean()

print(mc_eu1-mc_eu2)
0.0

Setting the random state allows us to reproduce the same sequence of pseudo-random numbers.

Now, let us remind ourselves of the closed form solution:

expected_u(mu,sd,1)
0.950212931632136

We see that the value is very close to the value(s) estimated via MC. Indeed, the error can be explained via the central limit theorem, which gives us a control on the $L_2$ norm of the error of the form

$$\left \|\mathbb E[u(W)] - \frac 1 N \sum_{i=1}^N u(W_i) \right \|_{L_2} \leq \frac{C}{N^{1/2}}\, {\rm sd}(u(W_1))$$

Let us verify this empirically, using a plot and a regression. We want to retrieve the rate of convergence, which is the power 1/2 in the above control. To do this we use a log-log plot (think why).

n_vec = 2**np.arange(10,20)  # The number of MC simulations, we take powers of 2
rng = default_rng(1)  # Fix the seed to "1", so that the plot looks the same every time you run it
u = np.array([cara_utility(rng.normal(mu,sd,N),1).mean() for N in n_vec])  # Create an MC expected utility for the sizes above
error = np.abs(u - expected_u(mu, sd, 1))  # Calculate the error

plt.loglog(n_vec, error, 'ro', label='Error')  # Make a log log plot
plt.title('Error in Monte Carlo Expected utility as a function of size of sample - Gaussian case')
plt.xlabel('N')
plt.ylabel('Error')

# Let us also add a reference line. To do so, we need to calculate a simple regression.
# We can use the polyfit function.
m, b = np.polyfit(np.log(n_vec), np.log(error), 1)
plt.loglog(n_vec, np.exp(b+m*np.log(n_vec)), 'g--', label='Best fit: Error ='+ "%.2f N^(%.2f)" % (math.exp(b),m))
plt.legend()
<matplotlib.legend.Legend at 0x7faa48cb9e80>
[Figure: log-log plot of the Monte Carlo error against sample size, with best-fit line]

The hardest line of code in the plot above is possibly

plt.loglog(n_vec, np.exp(b+m*np.log(n_vec)), 'g--', label='Best fit: Error ='+ "%.2f N^(%.2f)" % (math.exp(b),m))

Let us look at two parts in particular:

'g--'

means: make a green dashed line,

while

label='Best fit: Error ='+ "%.2f N^(%.2f)" % (math.exp(b),m)

means: take the value of exp(b), format it as a float with two decimal figures, do the same with m, and write a string of the form 'exp(b) N^(m)' with this format. The result is passed via the keyword argument label, which matplotlib uses for the legend of the plot.
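For example, the same formatting applied to made-up values:

print("Best fit: Error = %.2f N^(%.2f)" % (1.23456, -0.4987))  # prints: Best fit: Error = 1.23 N^(-0.50)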

Check that you understand the other lines of code.
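In particular, np.polyfit with degree 1 performs a least-squares linear regression. A toy example (not in the original notebook):

# np.polyfit(xs, ys, 1) returns the slope and intercept of the least-squares line
xs = np.array([1., 2., 3., 4.])
ys = 2.0*xs + 1.0           # exactly linear data, so the fit is exact
m_toy, b_toy = np.polyfit(xs, ys, 1)
print(m_toy, b_toy)         # -> 2.0 1.0 (up to floating-point error)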

Note that the best-fit slope is close to -1/2, as expected. This is consistent with the theoretical error bound given before. Write the equations to be sure you understand why.

Exercise

  1. Compute, via a Monte Carlo simulation, the expected utility of a CRRA investor for the following gambles.

    • $W_1 \sim |aN + b|$, where $N$ is standard normally distributed and $a,b \in \mathbb R$.

    • $W_2 \sim \text{Exp}(\lambda_2)$, where $\lambda_2>0$.

You might have to look up online the commands for the corresponding random number generators. (Use the ones in numpy.random).

import numpy as np

def MC_expected_u1(a,b,n,rho):
    # MC expected CRRA utility of W1 = |a*N + b|; log utility when rho = 1
    if rho == 1:
        crra_u = lambda x: np.log(x)
    else:
        crra_u = lambda x: (x**(1-rho))/(1-rho)
    rng = np.random.default_rng()
    return crra_u(np.abs(rng.normal(b,a,n))).mean()

def MC_expected_u2(lamda,n,rho):
    # MC expected CRRA utility of W2 ~ Exp(lamda); numpy's exponential takes the scale 1/lamda
    if rho == 1:
        crra_u = lambda x: np.log(x)
    else:
        crra_u = lambda x: (x**(1-rho))/(1-rho)
    rng = np.random.default_rng()
    return crra_u(rng.exponential(1/lamda,n)).mean()
  2. Write a function that computes the certainty equivalent for a CRRA investor. (Hint: You might have to compute, on a piece of paper, $u^{-1}$ for the different relative risk aversions $\rho$.)

def certainty_equivalent1(a,b,n,rho):
    if rho == 1:
        u_inverse = lambda x: np.exp(x)
        return u_inverse(MC_expected_u1(a, b, n, rho))
    else:
        u_inverse = lambda x,y: ((1-y)*x)**(1/(1-y))
        return u_inverse(MC_expected_u1(a, b, n, rho), rho)

def certainty_equivalent2(lamda,n,rho):
    if rho == 1:
        u_inverse = lambda x: np.exp(x)
        return u_inverse(MC_expected_u2(lamda, n, rho))
    else:
        u_inverse = lambda x,y: ((1-y)*x)**(1/(1-y))
        return u_inverse(MC_expected_u2(lamda, n, rho), rho)
  3. With $a = 1$ and $b = 2$, plot the certainty equivalent of a CRRA investor as a function of relative risk aversion $\rho$, using gamble $W_1$.

x = np.linspace(-1,3,1000)  # range of coefficients of relative risk aversion
y = np.array([certainty_equivalent1(1,2,10000,rho) for rho in x])
plt.plot(x,y)
plt.show()