
LS30 Cookbook


LS 30B related items

  • dde_solve

  • linear algebra

  • iterating a discrete-time matrix model

  • derivatives

dde_solve

dde_solve is a function for solving single-variable delay differential equations (DDEs), that is, equations that depend explicitly on values of the state variable in the past.

Here's an example of a delay differential equation:

x' = 3.5\frac{x^2}{1+x^2} - x(t-5)

To get started with solving it, first you'll need to run the cell containing the dde_solve function. It will either be in a separate file distributed with Lab 2 or included within the .turnin file itself. (It's also included here so that the examples below work.)

Below is the documentation for the dde_solve function.

TODO:

  • what the history function represents

Function definition: dde_solve(dde, statevar, delayedvars, history, tmax, timestep)

Description: Numerically integrate a delay differential equation

A single-variable delay differential equation is an equation of the form X'(t) = f(X, X_τ1, X_τ2, ..., X_τn), where on the right-hand side X denotes X(t) and each X_τ denotes X(t - τ) for some constant delay τ.

Inputs:

dde: The right hand side of the delay differential equation

statevar: The state variable to use (X in the example above)

delayedvars: A dictionary whose keys are the symbolic variables that appear in the DDE expression, which represent the value of the state variable some time ago, and whose values are the actual delays. For example, if the DDE is X'(t) = X(t) - X(t - 5) + X(t - 17.4) the dde argument might be X - Xtau + Xtau2, in which case this argument would be {Xtau: 5, Xtau2: 17.4}. Note that this means before calling this function, all three of X, Xtau, and Xtau2 should have been declared as symbolic variables: var("X, Xtau, Xtau2")

history: A Sage expression for the history, as a function of t (time)

tmax: Integrate from t = 0 to this time

timestep: The size of each time step (the smaller, the more accurate)

Output: A list of the values of the state variable (X in the example above) at each time step from t = 0 to tmax

Examples:

A version of Mackey-Glass, using a constant 0 history:

var("X, X_t")
sol = dde_solve(1 - X * X_t^5/(1 + X_t^5), X, {X_t: 8}, history=0, tmax=400, timestep=0.1)
list_plot(zip(srange(0, 400, 0.1), sol), plotjoined=True)

The same, but with an oscillating history function, giving weird transients that give way to a periodic steady state:

var("t, X, X_tau")
eq = 1 - X * X_tau^5/(1 + X_tau^5)
sol = dde_solve(eq, X, {X_tau: 8}, 5*cos(t), 400, 0.1)
list_plot(zip(srange(0, 400, 0.1), sol), plotjoined=True)

Click the little arrow on the side to show/hide the code for the dde_solve function!

%auto
def dde_solve(dde, statevar, delayedvars, history, tmax, timestep):
    # Check validity of delays.
    if min(delayedvars.values()) < 0:
        raise ValueError("This function will not work with negative delays. "
                         "Consider consulting a fortune teller instead.")
    if any(val < timestep for val in delayedvars.values()):
        raise ValueError("Time step should not be larger than delay.")
    # Set up variables and delays.
    delayedvars = delayedvars.items()
    dde = dde.subs({v: statevar for v, delay in delayedvars if delay == 0})
    delayedvars = [(v, delay) for v, delay in delayedvars if delay != 0]
    allvars = [str(statevar)] + [str(v) for v, delay in delayedvars]
    delays = [delay for v, delay in delayedvars]
    # Set up fast functions.
    dde_func = fast_float(dde, *allvars)
    history_func = fast_float(history, "t")
    # Adjust the timestep if necessary.
    mindelay = min(delays) if delays else timestep
    timestepcorrectionfactor = ceil(timestep / mindelay)
    timestep /= timestepcorrectionfactor
    # A function to perform history lookups.
    def lookup(t):
        """Does a history lookup at each delay from t, stores result in allvars[1:]"""
        for i, delay in enumerate(delays):
            if t - delay <= 0:
                allvars[i+1] = history_func(t - delay)
            else:
                r = (t - delay) / timestep
                n = floor(r)
                r -= n
                allvars[i+1] = result[n]*(1 - r) + result[n + 1]*r
    # Set up for the first iteration.
    result = [history_func(0)]
    lookup(0)
    for t in sxrange(0, tmax - timestep, timestep):
        # Compute k1. Note history lookup has already been done.
        allvars[0] = result[-1]
        k1 = dde_func(*allvars)
        # Compute k2.
        lookup(t + timestep/2)
        allvars[0] += timestep/2 * k1
        k2 = dde_func(*allvars)
        # Compute k3. Note history lookup has already been done.
        allvars[0] = result[-1] + timestep/2 * k2
        k3 = dde_func(*allvars)
        # Compute k4.
        lookup(t + timestep)
        allvars[0] = result[-1] + timestep * k3
        k4 = dde_func(*allvars)
        # Finally, compute the RK4 weighted average.
        result.append(result[-1] + (k1 + 2*k2 + 2*k3 + k4)/6 * timestep)
    return result[::timestepcorrectionfactor]
var('old_x')   # old_x represents the delayed state variable
history = .2   # we'll assume that x = .2 for t <= 0
tmax = 100     # time to simulate to
dt = .1        # timestep
delay_amt = 5  # amount of time delay
t = srange(0, tmax, dt)
equation = 3.5*(x^2/(1+x^2)) - old_x
sol = dde_solve(equation, x, {old_x: delay_amt}, history, tmax, dt)
list_plot(sol)
old_x

What the history function represents

Since we weren't there to observe what was going on before the simulation started, we have to make some assumptions. The history function is the "initial condition" of the system: it specifies the value of the state variable for all times t ≤ 0. Often we'll just assume it is a constant value to get things started. Once enough time has passed, the delayed values come from the simulation itself rather than from the assumed history, so we can trust the results more.
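To see how much the choice of history matters, here's a small sketch (assuming the dde_solve function and the Mackey-Glass equation from the examples above have been run). Two different constant histories produce different early behavior, and overlaying the two solutions lets you compare how much the choice of history affects the results.

var("X, X_t")
eq = 1 - X * X_t^5/(1 + X_t^5)
sol_a = dde_solve(eq, X, {X_t: 8}, history=0,   tmax=400, timestep=0.1)  # constant 0 history
sol_b = dde_solve(eq, X, {X_t: 8}, history=0.5, tmax=400, timestep=0.1)  # constant 0.5 history
times = srange(0, 400, 0.1)
p = list_plot(zip(times, sol_a), plotjoined=True, color="blue")
p += list_plot(zip(times, sol_b), plotjoined=True, color="red")
show(p)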

Linear Algebra

  • vector

  • matrix

  • math operations

  • eigenvalues and eigenvectors

    • RDF format

  • jacobian

  • creating a dynamical system from a matrix

Sagemath has very powerful tools for doing linear algebra. You can define vectors and matrices directly in sagemath and do numeric as well as symbolic operations with them.

Let's start with vectors.

You can define a vector and plot it like so:

a = vector([5,6])
show('a = ', a)
plot(a)
a = (5, 6)

Note that when you display vectors with show, they come out looking like row (horizontal) vectors. This tends to be confusing, since vectors in LS30 are typically written as column vectors. There may be cases later in your math career in which you really will have to distinguish between row vectors and column vectors, but for now, just assume that all vectors are column vectors unless otherwise specified.
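If you'd like to see a vector displayed vertically, one option (not used elsewhere in this cookbook, but a standard Sage method) is .column(), which converts the vector into a one-column matrix purely for display purposes:

a = vector([5,6])
show(a.column())  # displays a as a column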

If you want, you can use the option aspect_ratio=1 so that the axes are properly scaled relative to each other. I also like using the option figsize=4 so that the plot of a single vector doesn't take up so much space.

plot(a, aspect_ratio=1, color="red", thickness=8, figsize=4)

You can also plot 3 dimensional vectors:

# 3d vector
c = vector([1,2,3])
plot(c, figsize=4, axes=False)
3D rendering not yet implemented

Vectors can be added together, multiplied by scalars, and multiplied with each other to yield the dot product. (If you don't know what a dot product is, that's okay.) In addition, you can multiply them by matrices, which we'll see in a later section.

# adding and multiplying vectors
a = vector([5,6,1])
b = vector([3,2,1])
a+b
a-b
5*a
a*b  # dot product of a and b
(8, 8, 2)
(2, 4, 0)
(25, 30, 5)
28

You can define a matrix from a list of rows using the matrix function. Use the show function to display the matrix nicely with LaTeX. Note that a list of lists is not itself a matrix.

mlist = [[1,2,3],[4,5,6],[7,8,9]]
Q = matrix(mlist)
show("Q = ", Q)
print "Q is a {}".format(type(Q))
print "mlist is a {}".format(type(mlist))
Q =
[1 2 3]
[4 5 6]
[7 8 9]
Q is a <type 'sage.matrix.matrix_integer_dense.Matrix_integer_dense'>
mlist is a <type 'list'>

Matrices can also be defined by passing a list of columns to the column_matrix function.

mlist = [[1,2,3],[4,5,6],[7,8,9]]
M = column_matrix(mlist)
show("M = ", M)
M =
[1 4 7]
[2 5 8]
[3 6 9]

Many operations are defined for matrices, including +, -, and *.

A = matrix([[1,2],[-2,-1]])
B = matrix([[0,1],[-1,1]])
show("A = ", A, ", B = ", B)
show("A+B = ", A + B)  # matrix addition (element by element)
show("A-B = ", A - B)  # matrix subtraction
show("A*B = ", A * B)  # matrix multiplication
show("A^2 = ", A^2)    # matrix exponent
A =
[ 1  2]
[-2 -1]
B =
[ 0  1]
[-1  1]

A+B =
[ 1  3]
[-3  0]

A-B =
[ 1  1]
[-1 -2]

A*B =
[-2  3]
[ 1 -3]

A^2 =
[-3  0]
[ 0 -3]

You can multiply matrices by vectors using the * operator. Be careful: the order of the factors matters in this case.

For everything involving matrices multiplied by vectors in LS30, you will be doing something to the effect of

\begin{bmatrix}a & b\\c & d\end{bmatrix}\begin{bmatrix}e\\f\end{bmatrix} = \begin{bmatrix}ae+bf\\ce+df\end{bmatrix}

which is a matrix multiplied by a column vector. You can do this with the syntax A*v.

For the opposite case, which would be a row vector multiplying a matrix, we would have:

\begin{bmatrix}e & f\end{bmatrix}\begin{bmatrix}a & b\\c & d\end{bmatrix} = \begin{bmatrix}ae+cf & be+df\end{bmatrix}

Do you see how this is different from the first case?

%var a,b,c,d,e,f
A = matrix([[a,b],[c,d]])
v = vector([e,f])
show(A*v, "first case")   # matrix x column vector
show(v*A, "second case")  # row vector x matrix
(a*e + b*f, c*e + d*f) first case
(a*e + c*f, b*e + d*f) second case

You can check the dimensions of a matrix using the dimensions() method. The dimensions method returns a tuple. The first element of the tuple is the number of rows; the second element is the number of columns.

You can access columns of the matrix using the columns() method and rows of the matrix using the rows() method. Remember that sagemath and python both use 0-indexing which is different from the standard matrix indexing notation that starts at 1.

A = matrix([[1,2],[3,4],[5,6]])
show(A)
A.dimensions()  # 3 rows, 2 columns
A.row(0)        # first row of A
A.column(1)     # second column of A
[1 2]
[3 4]
[5 6]
(3, 2)
(1, 2)
(2, 4, 6)
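As mentioned above, rows() and columns() (without an index) return all of the rows or columns at once, as lists of vectors. A quick sketch, reusing the matrix A from the previous cell:

A.rows()     # [(1, 2), (3, 4), (5, 6)]
A.columns()  # [(1, 3, 5), (2, 4, 6)]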

Eigenvectors and Eigenvalues

For a square matrix (a matrix whose inputs and outputs have the same dimension), there are often special vector "directions": a vector pointing in such a direction ends up pointing in the same direction, scaled by a scalar λ, after being multiplied by A. We can describe this relationship mathematically as

Av = \lambda v

We call such vectors eigenvectors and the constants they are scaled by eigenvalues. Eigenvalues and eigenvectors are only defined for square matrices, and they are one of the most important concepts in linear algebra.

Let's try finding the eigenvectors and eigenvalues for a matrix, using Sagemath.

A = matrix(RDF,[[1,2],[2,1]])

You'll notice that we pass RDF as the first argument to matrix. This stands for "Real Double Field" and indicates to sagemath that we want a matrix for doing numerical, as opposed to symbolic, operations. The matrix that sagemath returns will be optimized for operations requiring high numerical precision, like finding eigenvalues and eigenvectors.

vecs = A.eigenvectors_right()
v = vecs[0][1][0]
show("A = ", A)
show("v = ", v)
show("A*v = ", A*v)
plot(A*v) + plot(v, color="red", aspect_ratio=1, figsize=4, gridlines=True)
A =
[1.0 2.0]
[2.0 1.0]
v = (0.707106781187, 0.707106781187)
A*v = (2.12132034356, 2.12132034356)

An eigenvector before and after being scaled by the matrix A

If we take a bit of time to examine the output of A.eigenvectors_right(), we'll notice some potentially confusing output:

evec_data = A.eigenvectors_right()
evec_data
[(3.0000000000000004, [(0.7071067811865476, 0.7071067811865474)], 1), (-0.9999999999999996, [(-0.7071067811865474, 0.7071067811865476)], 1)]

Don't be intimidated! Notice that this is a list of tuples (you can think of a tuple as a type of list). There will be a separate tuple for each eigenvalue/eigenvector pair. The first element of the tuple is an eigenvalue of A. The second element of the tuple is a list containing the corresponding eigenvector for that eigenvalue. You don't need to worry about the third element. (If you're curious, you can read the docstring yourself with A.eigenvectors_right?) To access individual elements of this list, you'll need to use indexing.

evec_data[0]        # data corresponding to the first eval/evec pair
evec_data[0][0]     # eigenvalue
evec_data[0][1][0]  # eigenvector (for some reason, sagemath decides to encase the vector within a list)
evec_data[0][2]     # the algebraic multiplicity of the eigenvalue; this is a numerical calculation, so this number is always 1
(3.0000000000000004, [(0.7071067811865476, 0.7071067811865474)], 1)
3.0000000000000004
(0.7071067811865476, 0.7071067811865474)
1
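As a quick sanity check (a small sketch reusing evec_data and A from the cells above), you can verify the defining relationship Av = λv directly:

lam = evec_data[0][0]   # eigenvalue (approximately 3)
v = evec_data[0][1][0]  # the corresponding eigenvector
A*v                     # (2.12132..., 2.12132...)
lam*v                   # the same vector, up to rounding error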

Another thing to note: The method that we are calling is named A.eigenvectors_right(). This corresponds to eigenvectors of a matrix A when multiplied with the vector on the right. These are called right eigenvectors. If a vector is an eigenvector of A when multiplied with the vector on the left, then we call that vector a left eigenvector. LS30 will not cover left eigenvectors.

Iterating a Discrete-Time Matrix Model

Let's say that we have a discrete-time matrix model and we want to iterate to get a time series. We can use a for loop to iterate it forwards in time. We'll use a 2x2 matrix model for this but this will work for systems with more dimensions as well.

A = matrix(RDF, [[.5, .6], [.25, .9]])
p = vector([10, 5])  # initial population is 10 juveniles, 5 adults
for i in srange(5):  # iterate 5 times
    print p
    p = A*p  # propagate forwards
print p
(10, 5)
(8.0, 7.0)
(8.2, 8.3)
(9.08, 9.52)
(10.251999999999999, 10.838)
(11.628799999999998, 12.3172)

If you are just interested in a single population's progress, you can do that easily:

A = matrix(RDF, [[.5, .6], [.25, .9]])
pop_list = []  # create an empty list
p = vector([10, 5])
for i in srange(5):
    pop_list.append(p[0])  # store juveniles each time step
    p = A*p
list_plot(pop_list)  # plot the list

Here's another fancier example where we record the output and create a time series.

A = matrix(RDF, [[.5, .6], [.25, .9]])
pop_list = []  # create an initial list
p = vector([10, 5])
for i in srange(5):
    pop_list.append(p)  # append the entire population vector
    p = A*p
populations = zip(*pop_list)  # create multiple time series for juveniles and adults
# overlay each time series
plots = []
for pop, c in zip(populations, rainbow(len(populations))):
    plots.append(list_plot(pop, plotjoined=True, color=c))
show(sum(plots))

This is the output of zip(*pop_list). It separates the time series in pop_list.

zip(*pop_list)
[(10, 8.0, 8.2, 9.08, 10.251999999999999), (5, 7.0, 8.3, 9.52, 10.838)]

Derivatives

  • diff

  • jacobian

You can use the diff function to take symbolic partial derivatives. The first argument to diff must be a symbolic expression composed of either sagemath built-in functions or symbolic functions that you created yourself. The second argument to diff tells sagemath which variable to differentiate with respect to.

show(diff(sin(x), x)) # derivative of sin(x) is cos(x)
cos(x)
show(diff(x^2 + 6*x + 5,x))
2*x + 6
show(diff(log(x^2),x))
2/x

We can see from the above example that sagemath doesn't forget the chain rule. However, since diff takes only a partial derivative, you might be surprised to learn that sagemath will not include a \frac{dx}{dt} for you.
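If you do want the dx/dt factor to show up, one approach (a small sketch; the function(...) syntax assumes a reasonably recent version of Sage, and u here is just a stand-in name) is to declare the state variable as an unspecified function of t and then differentiate with respect to t:

var('t')
u = function('u')(t)     # u is an unknown function of t
show(diff(log(u^2), t))  # 2*diff(u(t), t)/u(t) -- the chain rule now includes du/dt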

Sagemath handles multiple variables, no problem.

var('x','y')
show(diff(e^(x^2 + y^2), y))
(x, y)
2*y*e^(x^2 + y^2)

The Jacobian

The Jacobian of a system of functions is essentially all of its partial derivatives arranged in a matrix. Sagemath has a nice function for exactly this purpose, aptly named jacobian, so that you do not have to use diff yourself.

var('x','y')
f(x,y) = x^2*y + 5*x^2 + 1*y^2
g(x,y) = y*x + 1/(1+x^2)
system = [f,g]
jacobian(system, [x,y])
(x, y)
[    (x, y) |--> 2*x*y + 10*x           (x, y) |--> x^2 + 2*y]
[(x, y) |--> y - 2*x/(x^2 + 1)^2                 (x, y) |--> x]

Simply put together a list of symbolic functions and specify which variables to take the partial derivatives with respect to. The order in which you specify the variables affects how the matrix elements are laid out.

Let's look at the way that we did ours earlier:

system = [f,g]
jacobian(system, [x,y])

The output will be the Jacobian matrix

\begin{bmatrix}\frac{df}{dx} & \frac{df}{dy}\\\frac{dg}{dx} & \frac{dg}{dy}\end{bmatrix}

Notice how the order of the partial derivatives is xx, then yy.

We can change the order of the columns by switching the order of the symbolic variables:

system = [f,g]
jacobian(system, [y,x])

will give us

\begin{bmatrix}\frac{df}{dy} & \frac{df}{dx}\\\frac{dg}{dy} & \frac{dg}{dx}\end{bmatrix}
system = [f,g]
jacobian(system, [y,x])
[       (x, y) |--> x^2 + 2*y        (x, y) |--> 2*x*y + 10*x]
[               (x, y) |--> x (x, y) |--> y - 2*x/(x^2 + 1)^2]

The nice thing about the jacobian function is that its output is easy to use for other things. For example, you can use it as a function and evaluate it at a particular (x, y):

J = jacobian(system, [x,y])
J12 = J(x=1, y=2)  # J(1,2) also works
J12
type(J12)  # comes out as a matrix
[ 14   5]
[3/2   1]
<type 'sage.matrix.matrix_symbolic_dense.Matrix_symbolic_dense'>
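One common next step (sketched here, assuming the cells above have been run) is to convert the evaluated Jacobian from a symbolic matrix into an RDF matrix, so that the numerical eigenvalue tools from the eigenvectors section can be used on it:

J12_numeric = J12.change_ring(RDF)  # convert the symbolic matrix to a numerical (RDF) one
J12_numeric.eigenvalues()           # approximately [14.55, 0.45]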