LS 30B related items
dde_solve
linear algebra
iterating a discrete-time matrix model
derivatives
dde_solve
dde_solve is a function meant for solving single-variable delay differential equations (DDEs). These are equations that have an explicit dependence on state values in the past.
Here's an example of a delay differential equation:
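The one below is the Mackey-Glass-style equation used in the code examples later in this section, where X(t − τ) is the value of the state variable τ time units in the past:

```latex
X'(t) = 1 - X(t)\,\frac{X(t-\tau)^{5}}{1 + X(t-\tau)^{5}}
```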
To get started with solving it, first you'll need to run the cell containing the dde_solve function. It will either be in a separate file distributed with Lab 2 or included within the .turnin file itself. (It's also included here so that the examples work.)
Below is the documentation for the dde_solve function.
Function definition: dde_solve(dde, statevar, delayedvars, history, tmax, timestep)
Description: Numerically integrate a delay differential equation
A single-variable delay differential equation is an equation of the form X'(t) = f(X, Xτ1, Xτ2, ... , Xτn) where on the right hand side X denotes X(t) and each Xτ denotes X(t - τ) for some constant delay τ.
Inputs:
dde: The right hand side of the delay differential equation
statevar: The state variable to use (X in the example above)
delayedvars: A dictionary whose keys are the symbolic variables that appear in the DDE expression, which represent the value of the state variable some time ago, and whose values are the actual delays. For example, if the DDE is X'(t) = X(t) - X(t - 5) + X(t - 17.4) the dde argument might be X - Xtau + Xtau2, in which case this argument would be {Xtau: 5, Xtau2: 17.4}. Note that this means before calling this function, all three of X, Xtau, and Xtau2 should have been declared as symbolic variables: var("X, Xtau, Xtau2")
history: A Sage expression for the history, as a function of t (time)
tmax: Integrate from t = 0 to this time
timestep: The size of each time step (the smaller, the more accurate)
Output: A list of the values of the state variable (X in the example above) at each time step from t = 0 to tmax
Examples:
A version of Mackey-Glass, using a constant 0 history:
var("X, X_t")
sol = dde_solve(1 - X * X_t^5/(1 + X_t^5), X, {X_t: 8}, history=0, tmax=400, timestep=0.1)
list_plot(list(zip(srange(0, 400, 0.1), sol)), plotjoined=True)
The same, but with an oscillating history function, giving weird transients that give way to a periodic steady state:
var("t, X, X_tau")
eq = 1 - X * X_tau^5/(1 + X_tau^5)
sol = dde_solve(eq, X, {X_tau: 8}, 5*cos(t), 400, 0.1)
list_plot(list(zip(srange(0, 400, 0.1), sol)), plotjoined=True)
Click the little arrow on the side to show/hide the code for the dde_solve function!
What the history function represents
Since we weren't there to observe what was going on before the simulation started, we have to make some assumptions. The history function is the "initial condition" of the system: it specifies the value of the state variable for all times before t = 0. Often we'll assume it is a constant value just to get things started. After a little while has passed, the simulated trajectory depends less and less on the assumed history, and we'll be able to trust the results a little more.
Linear Algebra
vector
matrix
math operations
eigenvalues and eigenvectors
RDF format
jacobian
creating a dynamical system from a matrix
Sagemath has very powerful tools for doing linear algebra. You can define vectors and matrices directly in sagemath and do numeric as well as symbolic operations with them.
Let's start with vectors. You can define a vector and plot it like so:
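For example (the numbers here are just for illustration):

```python
v = vector([2, 3])   # a 2-dimensional vector
show(v)              # displays (2, 3)
plot(v)              # draws v as an arrow from the origin
```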
Note that when you display vectors with show, they come out looking like row (horizontal) vectors. This tends to be confusing, since vectors in LS30 are typically written as column vectors. There may be some cases later in your math career in which you will really have to distinguish between vectors written as rows and vectors written as columns, but for now, just assume that all vectors are column vectors unless otherwise specified.
If you want, you can use the option aspect_ratio=1 to make the axes properly scaled relative to each other. I also like using the option figsize=4 so that the plot of a single vector doesn't end up taking up so much space.
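For instance:

```python
plot(vector([2, 3]), aspect_ratio=1, figsize=4)
```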
You can also plot 3-dimensional vectors:
Vectors can be added together, multiplied by scalars, and multiplied with each other to yield the dot product. (If you don't know what a dot product is, that's okay.) In addition, you can multiply them by matrices, which we'll see in a later section.
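A quick sketch of these operations (the numbers are made up):

```python
u = vector([1, 2])
v = vector([3, 4])
show(u + v)   # vector addition: (4, 6)
show(5*u)     # scalar multiplication: (5, 10)
show(u*v)     # dot product: 1*3 + 2*4 = 11
```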
You can define a matrix from a list of rows using the matrix function. Use the show function to display the matrix in a nice way using LaTeX. Note that a list of lists is not itself a matrix.
Matrices can also be defined by passing a list of columns to the column_matrix function.
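For example, these two calls (with arbitrary entries) build the same matrix:

```python
A = matrix([[1, 2],
            [3, 4]])          # rows: (1, 2) and (3, 4)
B = column_matrix([[1, 3],
                   [2, 4]])   # columns: (1, 3) and (2, 4)
show(A)                       # same matrix either way
A == B                        # True
```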
Many operations are defined for matrices, including +, -, and *.
You can multiply matrices by vectors using the * operator. Be careful; the order of the operands matters in this case.
For everything involving matrices multiplied by vectors in LS30, you will be doing something to the effect of Av, which is a matrix multiplied by a column vector. You can do this with the syntax A*v.
For the opposite case, a row vector multiplying a matrix, we would have vA, written as v*A:
Do you see how this is different from the first case?
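A sketch of both orders, using made-up entries:

```python
A = matrix([[1, 2],
            [3, 4]])
v = vector([5, 6])
show(A*v)   # column-vector case: (1*5 + 2*6, 3*5 + 4*6) = (17, 39)
show(v*A)   # row-vector case:    (5*1 + 6*3, 5*2 + 6*4) = (23, 34)
```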
You can check the dimensions of a matrix using the dimensions() method. The dimensions method returns a tuple. The first element of the tuple is the number of rows; the second element is the number of columns.
You can access columns of the matrix using the columns() method and rows of the matrix using the rows() method. Remember that sagemath and python both use 0-indexing, which is different from the standard matrix indexing notation that starts at 1.
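A short sketch of these methods, with arbitrary entries:

```python
A = matrix([[1, 2, 3],
            [4, 5, 6]])
A.dimensions()   # (2, 3): 2 rows, 3 columns
A.rows()[0]      # first row: (1, 2, 3)
A.columns()[2]   # third column (index 2!): (3, 6)
```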
Eigenvectors and Eigenvalues
For a square matrix A (one whose inputs and outputs have the same dimension), there are often special vector "directions": a vector pointing in such a direction, after being multiplied by A, still points in the same direction, just scaled by a scalar. We can describe this relationship mathematically as Av = λv.
We call these vectors eigenvectors and the constants λ that they are scaled by eigenvalues. Eigenvalues and eigenvectors are among the most important concepts in linear algebra.
Let's try finding the eigenvectors and eigenvalues for a matrix, using Sagemath.
You'll notice that we use RDF as the first argument to the matrix function. This stands for "Real Double Field" and tells sagemath that we want a matrix for doing numerical, as opposed to symbolic, operations. The matrix that sagemath returns will be optimized for operations requiring high numerical precision, like finding eigenvalues and eigenvectors.
An eigenvector before and after being scaled by the matrix A
If we take a bit of time to examine the output of A.eigenvectors_right(), we'll notice some potentially confusing output:
Don't be intimidated! Notice that this is a list of tuples (you can think of a tuple as a type of list). There is a separate tuple for each eigenvalue/eigenvector pair. The first element of each tuple is an eigenvalue of A. The second element is a list containing the corresponding eigenvector(s) for that eigenvalue. You don't need to worry about the third element. (You can read the docstring yourself with A.eigenvectors_right? if you're curious.) To access individual elements of this list, you'll need to use indexing.
Another thing to note: the method we are calling is named A.eigenvectors_right(). It finds eigenvectors of the matrix A with the vector multiplied on the right, as in Av = λv. These are called right eigenvectors. If a vector is instead an eigenvector of A when multiplied on the left, we call it a left eigenvector. LS30 will not cover left eigenvectors.
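A small sketch, using a made-up RDF matrix whose eigenvalues are easy to check by hand:

```python
A = matrix(RDF, [[2, 1],
                 [1, 2]])
A.eigenvectors_right()
# Returns a list of (eigenvalue, [eigenvectors], multiplicity) tuples;
# the eigenvalues of this matrix are 3.0 and 1.0.
# Access individual pieces by indexing:
A.eigenvectors_right()[0][0]      # first eigenvalue
A.eigenvectors_right()[0][1][0]   # corresponding eigenvector
```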
Iterating a Discrete-Time matrix model
Let's say that we have a discrete-time matrix model and we want to iterate it to get a time series. We can use a for loop to step it forward in time. We'll use a 2x2 matrix model here, but the same approach works for systems with more dimensions.
If you are just interested in a single population's progress, we can do that easily:
Here's another fancier example where we record the output and create a time series.
This is the output of zip(*pop_list). It separates the combined time series in pop_list into one sequence per variable.
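Here's a minimal sketch of the whole pattern in plain Python (the matrix entries and starting populations are made up for illustration; in Sage you could define A with matrix(...) and write the update as pop = A*pop):

```python
# Hypothetical 2x2 discrete-time model: entries chosen only for illustration.
A = [[1.1, 0.2],
     [0.1, 0.9]]

pop = [10.0, 5.0]          # initial populations [X, Y]
pop_list = [tuple(pop)]    # record the state at each time step

for step in range(50):
    # matrix-vector multiplication, written out entry by entry
    pop = [A[0][0]*pop[0] + A[0][1]*pop[1],
           A[1][0]*pop[0] + A[1][1]*pop[1]]
    pop_list.append(tuple(pop))

# zip(*pop_list) separates the list of (X, Y) pairs into two time series
X_series, Y_series = zip(*pop_list)
```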
Derivatives
You can use the diff function to take symbolic partial derivatives. The first argument to diff must be a symbolic expression composed of either sagemath built-in functions or symbolic functions that you created yourself. The second argument tells sagemath which variable to differentiate with respect to.
We can see from the above example that sagemath doesn't forget the chain rule. However, since this is a partial derivative only, you might be surprised to learn that sagemath will not include a dX/dt term for you: unless you explicitly declare a variable as a function of t, it is treated as a constant with respect to t.
Sagemath handles multiple variables, no problem.
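For instance, a small sketch with a made-up expression in two variables:

```python
var("x, y")
f = sin(x^2) * y
diff(f, x)   # 2*x*y*cos(x^2) -- note the chain rule
diff(f, y)   # sin(x^2) -- x is held constant
```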
The Jacobian
The Jacobian of a function is essentially its multidimensional partial derivatives arranged in a matrix. Sagemath has a nice function for this purpose, aptly named jacobian, so you do not have to use diff.
Simply put together a list of symbolic functions and specify which variables to take the partial derivatives with respect to. The order in which you specify the variables affects the way the matrix elements are laid out.
Let's look at the one we computed earlier. Notice how the columns of the Jacobian appear in the same order as the variables: the partial derivatives with respect to the first variable you listed come first, then those with respect to the second.
We can change the order of the columns by switching the order of the symbolic variables in the call to jacobian:
The nice thing about the jacobian function is that its output is easy to use for other things. For example, you can evaluate it at a particular point:
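For example, with a pair of made-up functions:

```python
var("x, y")
J = jacobian([x^2*y, x + sin(y)], (x, y))
show(J)             # [2*x*y    x^2]
                    # [    1 cos(y)]
J.subs(x=1, y=0)    # evaluate at the point (1, 0): [[0, 1], [1, 1]]
```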