
Public worksheets for UCLA's Mathematics for Life Scientists course


Recall the black bear population model:

\begin{bmatrix} J_{t+1} \\ A_{t+1} \end{bmatrix} = \begin{bmatrix} 0.65 & 0.5 \\ 0.25 & 0.9 \end{bmatrix} \begin{bmatrix} J_t \\ A_t \end{bmatrix}

The interactive below shows its behavior. As usual, to use interactives in this worksheet, click “Open in CoCalc” at the top right of this page.

@interact
def discretetime(iterations=(0, 20, 1),
                 plotjoined=checkbox(True, label="Connect dots"),
                 show_eigenvectors=checkbox(False, label="Show eigenvector line(s)")):
    M = matrix(RDF, [
        [0.65, 0.5],
        [0.25, 0.9],
    ])
    ics = {
        (2, 5): "blue",
        (7, 4): "purple",
        (0, 3): "gold",
        (9, 0): "fuchsia",
    }
    p = Graphics()
    if show_eigenvectors:
        for eval, evec, mult in M.eigenvectors_right():
            if imag(eval):
                continue
            slope = evec[0][1]/evec[0][0]
            color = "green" if abs(eval) < 1 else "red"
            p += plot(slope*x, (x, -20, 20), color=color, thickness=2)
    for state, color in ics.items():
        states = [vector(state)]
        for i in range(iterations):
            states.append(M*states[-1])
        p += list_plot(states, plotjoined=False, color=color, size=50)
        if plotjoined:
            p += list_plot(states, plotjoined=True, color=color)
    show(p, xmin=-5, ymin=-5, xmax=20, ymax=20, aspect_ratio=1,
         axes_labels=("$J$", "$A$"))

Below is another interactive showing what happens to states along the eigenvector lines, including states in the other three quadrants. Even though this is biologically meaningless, it will help us to understand the overall behavior of the system.

@interact
def discretetime(iterations=(0, 20, 1),
                 plotjoined=checkbox(True, label="Connect dots"),
                 show_eigenvectors=checkbox(False, label="Show eigenvector line(s)")):
    M = matrix(RDF, [
        [0.65, 0.5],
        [0.25, 0.9],
    ])
    ics = {
        (1, 1): "darkred",
        (-1, -1): "darkred",
        (-20, 10): "darkgreen",
        (20, -10): "darkgreen",
    }
    p = Graphics()
    if show_eigenvectors:
        for eval, evec, mult in M.eigenvectors_right():
            if imag(eval):
                continue
            slope = evec[0][1]/evec[0][0]
            color = "green" if abs(eval) < 1 else "red"
            p += plot(slope*x, (x, -20, 20), color=color, thickness=2)
    for state, color in ics.items():
        states = [vector(state)]
        for i in range(iterations):
            states.append(M*states[-1])
        p += list_plot(states, plotjoined=False, color=color, size=50)
        if plotjoined:
            p += list_plot(states, plotjoined=True, color=color)
    show(p, xmin=-20, ymin=-20, xmax=20, ymax=20, aspect_ratio=1,
         axes_labels=("$J$", "$A$"))

Diagonal matrices:

\begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}

Growth along one axis, decay along the other:

First, look at what happens just along the axes.

D = diagonal_matrix(RDF, [1.2, 0.7])
size = 5

@interact
def discretetime(iterations=(0, 20, 1),
                 plotjoined=checkbox(True, label="Connect dots"),
                 ):
    ics = {
        (0, 5): "black",
        (1, 0): "black",
    }
    p = Graphics()
    for state, color in ics.items():
        states = [vector(state)]
        for i in range(iterations):
            states.append(D*states[-1])
        p += list_plot(states, plotjoined=False, color=color, size=50)
        if plotjoined:
            p += list_plot(states, plotjoined=True, color=color)
    show(D)
    p.show(xmin=-size, ymin=-size, xmax=size, ymax=size, aspect_ratio=1,
           axes_labels=("$X$", "$Y$"))

Now, see what happens when you put both together.

D = diagonal_matrix(RDF, [1.2, 0.7])
size = 5

@interact
def discretetime(iterations=(0, 20, 1),
                 plotjoined=checkbox(True, label="Connect dots"),
                 ):
    ics = {
        (0, 5): "black",
        (1, 0): "black",
        (1, 5): "black",
        (-1, 5): "black",
        (-1, -5): "black",
        (1, -5): "black",
    }
    p = Graphics()
    for state, color in ics.items():
        states = [vector(state)]
        for i in range(iterations):
            states.append(D*states[-1])
        p += list_plot(states, plotjoined=False, color=color, size=50)
        if plotjoined:
            p += list_plot(states, plotjoined=True, color=color)
    show(D)
    p.show(xmin=-size, ymin=-size, xmax=size, ymax=size, aspect_ratio=1,
           axes_labels=("$X$", "$Y$"))

Note that the behavior above shows that the equilibrium point at the origin is a saddle point.

Note also that the long-term behavior is growth along the $X$ axis, because that is the axis with exponential growth, driven by the larger eigenvalue, 1.2.
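That alignment with the $X$ axis can be spot-checked numerically. Here is a small NumPy sketch (not part of the worksheet's Sage cells) that iterates D = diag(1.2, 0.7):

```python
# Iterate the diagonal system D = diag(1.2, 0.7) and watch the state
# collapse onto the X axis, the direction of the larger eigenvalue 1.2.
import numpy as np

D = np.diag([1.2, 0.7])
state = np.array([1.0, 5.0])
for _ in range(20):
    state = D @ state

# X has grown by a factor of 1.2**20 while Y has shrunk by 0.7**20,
# so the Y/X ratio is essentially zero: the motion is along the X axis.
print(state)
print(state[1] / state[0])
```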

Growth along both axes:

D = diagonal_matrix(RDF, [1.1, 1.3])
size = 8

@interact
def discretetime(iterations=(0, 20, 1),
                 plotjoined=checkbox(True, label="Connect dots"),
                 ):
    ics = {
        (0, 1): "black",
        (1, 0): "black",
    }
    p = Graphics()
    for state, color in ics.items():
        states = [vector(state)]
        for i in range(iterations):
            states.append(D*states[-1])
        p += list_plot(states, plotjoined=False, color=color, size=50)
        if plotjoined:
            p += list_plot(states, plotjoined=True, color=color)
    show(D)
    p.show(xmin=-size, ymin=-size, xmax=size, ymax=size, aspect_ratio=1,
           axes_labels=("$X$", "$Y$"))
D = diagonal_matrix(RDF, [1.1, 1.3])
size = 8

@interact
def discretetime(iterations=(0, 20, 1),
                 plotjoined=checkbox(True, label="Connect dots"),
                 ):
    ics = {
        (0, 1): "black",
        (1, 0): "black",
        (1, 1): "black",
        (-1, 1): "black",
        (-1, -1): "black",
        (1, -1): "black",
    }
    p = Graphics()
    for state, color in ics.items():
        states = [vector(state)]
        for i in range(iterations):
            states.append(D*states[-1])
        p += list_plot(states, plotjoined=False, color=color, size=50)
        if plotjoined:
            p += list_plot(states, plotjoined=True, color=color)
    show(D)
    p.show(xmin=-size, ymin=-size, xmax=size, ymax=size, aspect_ratio=1,
           axes_labels=("$X$", "$Y$"))

In the case above, we see that the equilibrium point at the origin is unstable (a source). This should not be surprising at all.

More importantly, we also see that, although both $X$ and $Y$ are increasing (and will increase exponentially forever), the long-term behavior is primarily growth in the $Y$ direction, because that axis has the faster exponential growth rate. Again, that faster growth rate comes from the larger of the two eigenvalues, 1.3.

Decay along both axes:

D = diagonal_matrix(RDF, [0.9, 0.6])
size = 5

@interact
def discretetime(iterations=(0, 20, 1),
                 plotjoined=checkbox(True, label="Connect dots"),
                 ):
    ics = {
        (0, 5): "black",
        (5, 0): "black",
    }
    p = Graphics()
    for state, color in ics.items():
        states = [vector(state)]
        for i in range(iterations):
            states.append(D*states[-1])
        p += list_plot(states, plotjoined=False, color=color, size=50)
        if plotjoined:
            p += list_plot(states, plotjoined=True, color=color)
    show(D)
    p.show(xmin=-size, ymin=-size, xmax=size, ymax=size, aspect_ratio=1,
           axes_labels=("$X$", "$Y$"))
D = diagonal_matrix(RDF, [0.9, 0.6])
size = 5

@interact
def discretetime(iterations=(0, 20, 1),
                 plotjoined=checkbox(True, label="Connect dots"),
                 ):
    ics = {
        (0, 5): "black",
        (5, 0): "black",
        (5, 5): "black",
        (-5, 5): "black",
        (-5, -5): "black",
        (5, -5): "black",
    }
    p = Graphics()
    for state, color in ics.items():
        states = [vector(state)]
        for i in range(iterations):
            states.append(D*states[-1])
        p += list_plot(states, plotjoined=False, color=color, size=50)
        if plotjoined:
            p += list_plot(states, plotjoined=True, color=color)
    show(D)
    p.show(xmin=-size, ymin=-size, xmax=size, ymax=size, aspect_ratio=1,
           axes_labels=("$X$", "$Y$"))

Again, it should not be surprising that the above equilibrium point is stable (a sink).

More importantly, in the above case we see that, although both $X$ and $Y$ are decaying exponentially, the trajectories always eventually approach the origin (the equilibrium point) along the $X$ axis. So the long-term behavior is exponential decay (at 10% per iteration) along the $X$ axis. This time, that is because the $X$ axis has the slower of the two decay rates. But the slower of the two decay rates still means the larger of the two eigenvalues!
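Again, this is easy to verify numerically; a NumPy sketch (outside the worksheet's Sage cells) for D = diag(0.9, 0.6):

```python
# With D = diag(0.9, 0.6), both components decay, but Y decays faster,
# so the direction of the state vector lines up with the X axis even
# though everything is heading toward the origin.
import numpy as np

D = np.diag([0.9, 0.6])
state = np.array([5.0, 5.0])
for _ in range(30):
    state = D @ state

print(state)                # both components tiny, but Y far tinier
print(state[1] / state[0])  # (0.6/0.9)**30, essentially zero
```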

Important conclusion!

In all of the cases above, the conclusion was that the larger of the two eigenvalues determined the long-term behavior of the system. This largest eigenvalue is called the dominant eigenvalue. Actually, since the exponential growth/decay rates are determined by the absolute value of the corresponding eigenvalue, the dominant eigenvalue is the one whose absolute value is largest. To be more specific about its meaning:

  • This dominant eigenvalue determines the long-term growth (or decay) rate of the system, and

  • That growth (or decay) in the long run will happen primarily in the direction of (i.e. parallel to) the axis that corresponds to that eigenvalue.
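This is also exactly why power iteration works as a numerical method: repeatedly applying the matrix to a generic starting state makes the per-step growth factor settle at the dominant eigenvalue. A NumPy sketch (not part of the worksheet) using the black bear matrix:

```python
# Estimate the dominant eigenvalue of the black bear matrix by power
# iteration: the per-step growth factor of a generic trajectory
# converges to the eigenvalue of largest absolute value.
import numpy as np

M = np.array([[0.65, 0.5],
              [0.25, 0.9]])
state = np.array([2.0, 5.0])
for _ in range(50):
    prev = state
    state = M @ state

growth = np.linalg.norm(state) / np.linalg.norm(prev)
print(round(growth, 4))      # -> 1.15, the dominant eigenvalue
print(np.linalg.eigvals(M))  # approximately [0.4, 1.15], in some order
```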

Coordinates

First, standard $X, Y$ coordinates

e1 = vector([1, 0])
e2 = vector([0, 1])
color1 = colors["blue"].darker(0.2)
color2 = colors["gold"].darker(0.2)
size = 5
basisvectors = plot(e1, color=color1) + plot(e2, color=color2)
old_grid = Graphics()
for c in range(-size + 1, size):
    old_grid += line(((-size, c), (size, c)), color=colors["white"].darker(0.1))
    old_grid += line(((c, -size), (c, size)), color=colors["white"].darker(0.1))

@interact
def oldcoordinates(x=(-size, size, 1), y=(-size, size, 1),
                   show_old_grid=checkbox(True, label=u"Original x-y \u201ccoordinate grid\u201d"),
                   show_vectors=checkbox(True, label="Standard basis vectors"),
                   show_solution=checkbox(False, label="Solution to point along standard basis vectors"),
                   ):
    p = Graphics()
    if show_old_grid:
        p += old_grid
    if show_vectors:
        p += basisvectors
    if show_solution:
        p += plot(x*e1, color=color1.darker(0.4))
        p += plot(y*e2, color=color2.darker(0.4), start=x*e1)
        print("x = {}, y = {}. That is, ({}, {}) = {} e1 + {} e2".format(x, y, x, y, x, y))
    p += point((x, y), color="black", size=40)
    show(p, xmin=-size, xmax=size, ymin=-size, ymax=size, aspect_ratio=1,
         figsize=6, axes_labels=("$X$", "$Y$"))

Coordinates with respect to a different basis:

\vec{u}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad \vec{u}_2 = \begin{bmatrix} -2 \\ 1 \end{bmatrix}
u1 = vector([1, 1])
u2 = vector([-2, 1])
T = column_matrix([u1, u2])
color1 = colors["red"].darker(0.2)
color2 = colors["green"].darker(0.2)
size = 8
eigenvectors = plot(u1, color=color1) + plot(u2, color=color2)
old_grid = Graphics()
for c in range(-size + 1, size):
    old_grid += line(((-size, c), (size, c)), color=colors["white"].darker(0.1))
    old_grid += line(((c, -size), (c, size)), color=colors["white"].darker(0.1))
new_grid = Graphics()
for i in range(-2*size, 2*size + 1):
    if i == 0:
        new_grid += line((-2*size*u1, 2*size*u1), color=color1, thickness=2)
        new_grid += text("$R$", u1/abs(u1[0])*size*1.1, color=color1, fontsize=16)
        new_grid += line((-2*size*u2, 2*size*u2), color=color2, thickness=2)
        new_grid += text("$S$", u2/abs(u2[0])*size*1.1, color=color2, fontsize=16)
    else:
        new_grid += line((u2*i - 2*size*u1, u2*i + 2*size*u1), color=color1.lighter(0.7))
        new_grid += line((u1*i - 2*size*u2, u1*i + 2*size*u2), color=color2.lighter(0.7))

@interact
def newcoordinates(x=(-size, size, 1), y=(-size, size, 1),
                   show_old_grid=checkbox(False, label=u"Original x-y \u201ccoordinate grid\u201d"),
                   show_vectors=checkbox(True, label="Eigenvectors"),
                   show_new_grid=checkbox(False, label=u"Eigenvector \u201ccoordinate grid\u201d"),
                   show_solution=checkbox(False, label="Solution to point along eigenvectors"),
                   ):
    p = Graphics()
    if show_old_grid:
        p += old_grid
    if show_vectors:
        p += eigenvectors
    if show_new_grid:
        p += new_grid
    if show_solution:
        r, s = T.inverse()*vector([x, y])
        p += plot(r*u1, color=color1.darker(0.4))
        p += plot(s*u2, color=color2.darker(0.4), start=r*u1)
        print("r = {}, s = {}. That is, ({}, {}) = {} u1 + {} u2".format(r, s, x, y, r, s))
    p += point((x, y), color="black", size=40)
    show(p, xmin=-size, xmax=size, ymin=-size, ymax=size, figsize=6,
         aspect_ratio=1, axes_labels=("$X$", "$Y$"))

Putting it all together...

The black bear population model again, using the new “$R, S$-coordinates”

Note that in the black bear population model, the eigenvectors of the matrix were exactly the same vectors $\vec{u}_1$ and $\vec{u}_2$ that we used just now to define the $R, S$-coordinate system. The eigenvalue corresponding to $\vec{u}_1$ was 1.15, and the eigenvalue corresponding to $\vec{u}_2$ was 0.4.
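Those two facts are quick to verify with NumPy (a check outside the worksheet's Sage code):

```python
# Verify that u1 and u2 are eigenvectors of the bear matrix M, with
# eigenvalues 1.15 and 0.4 respectively: M u = (eigenvalue) * u.
import numpy as np

M = np.array([[0.65, 0.5],
              [0.25, 0.9]])
u1 = np.array([1.0, 1.0])
u2 = np.array([-2.0, 1.0])

print(M @ u1)   # [1.15, 1.15] = 1.15 * u1
print(M @ u2)   # [-0.8, 0.4]  = 0.4  * u2
```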

First, a reminder of what that means about exponential growth/decay along the eigenvector lines:

M = matrix(RDF, [
    [0.65, 0.5],
    [0.25, 0.9],
])
u1 = vector([1, 1])
u2 = vector([-2, 1])
T = column_matrix([u1, u2])
color1 = colors["red"].darker(0.2)
color2 = colors["green"].darker(0.2)
size = 12
eigenvectors = plot(u1, color=color1, arrowsize=3) + plot(u2, color=color2, arrowsize=3)
old_grid = Graphics()
for c in range(-size + 1, size):
    old_grid += line(((-size, c), (size, c)), color=colors["white"].darker(0.1))
    old_grid += line(((c, -size), (c, size)), color=colors["white"].darker(0.1))
new_grid = Graphics()
for i in range(-2*size, 2*size + 1):
    if i == 0:
        new_grid += line((-2*size*u1, 2*size*u1), color=color1, thickness=2)
        new_grid += text("$R$", u1/abs(u1[0])*size*1.1, color=color1, fontsize=16)
        new_grid += line((-2*size*u2, 2*size*u2), color=color2, thickness=2)
        new_grid += text("$S$", u2/abs(u2[0])*size*1.1, color=color2, fontsize=16)
    else:
        new_grid += line((u2*i - 2*size*u1, u2*i + 2*size*u1), color=color1.lighter(0.7))
        new_grid += line((u1*i - 2*size*u2, u1*i + 2*size*u2), color=color2.lighter(0.7))
ics = {
    tuple(u1): color1.darker(0.6),
    tuple(u2/abs(u2[0])*size): color2.darker(0.6),
}

@interact
def newcoordinates(iterations=(0, 20, 1),
                   plotjoined=checkbox(True, label="Connect dots"),
                   show_old_grid=checkbox(True, label=u"Original x-y \u201ccoordinate grid\u201d"),
                   show_vectors=checkbox(False, label="Eigenvectors"),
                   show_new_grid=checkbox(False, label=u"Eigenvector \u201ccoordinate grid\u201d"),
                   ):
    p = Graphics()
    if show_old_grid:
        p += old_grid
    if show_vectors:
        p += eigenvectors
    if show_new_grid:
        p += new_grid
    for state, color in ics.items():
        states = [vector(state)]
        for i in range(iterations):
            states.append(M*states[-1])
        p += list_plot(states, plotjoined=False, color=color, size=50)
        if plotjoined:
            p += list_plot(states, plotjoined=True, color=color)
    show(M)
    show(p, xmin=-size, xmax=size, ymin=-size, ymax=size, figsize=10,
         aspect_ratio=1, axes_labels=("$X$", "$Y$"))

Now, explore how those other trajectories (the ones that don't start on the eigenvector lines) come about:

Be sure to click the “show projections” checkbox after simulating one of the trajectories.

You can also uncomment other initial conditions (find “ics” in the code) to see other trajectories.

M = matrix(RDF, [
    [0.65, 0.5],
    [0.25, 0.9],
])
u1 = vector([1, 1])
u2 = vector([-2, 1])
T = column_matrix([u1, u2])
color1 = colors["red"].darker(0.2)
color2 = colors["green"].darker(0.2)
size = 12
eigenvectors = plot(u1, color=color1, arrowsize=3) + plot(u2, color=color2, arrowsize=3)
old_grid = Graphics()
for c in range(-size + 1, size):
    old_grid += line(((-size, c), (size, c)), color=colors["white"].darker(0.1))
    old_grid += line(((c, -size), (c, size)), color=colors["white"].darker(0.1))
new_grid = Graphics()
for i in range(-2*size, 2*size + 1):
    if i == 0:
        new_grid += line((-2*size*u1, 2*size*u1), color=color1, thickness=2)
        new_grid += text("$R$", u1/abs(u1[0])*size*1.1, color=color1, fontsize=16)
        new_grid += line((-2*size*u2, 2*size*u2), color=color2, thickness=2)
        new_grid += text("$S$", u2/abs(u2[0])*size*1.1, color=color2, fontsize=16)
    else:
        new_grid += line((u2*i - 2*size*u1, u2*i + 2*size*u1), color=color1.lighter(0.7))
        new_grid += line((u1*i - 2*size*u2, u1*i + 2*size*u2), color=color2.lighter(0.7))
ics = {
    ( 2, 5): "blue",
    #( 7, 4): "purple",
    #( 0, 3): "gold",
    #( 9, 0): "fuchsia",
}

@interact
def newcoordinates(iterations=(0, 20, 1),
                   plotjoined=checkbox(True, label="Connect dots"),
                   show_old_grid=checkbox(False, label=u"Original x-y \u201ccoordinate grid\u201d"),
                   show_vectors=checkbox(False, label="Eigenvectors"),
                   show_new_grid=checkbox(True, label=u"Eigenvector \u201ccoordinate grid\u201d"),
                   show_projections=checkbox(False, label="Show projections onto eigenvector axes"),
                   only=checkbox(False, label="Show only those projections"),
                   ):
    p = Graphics()
    if show_old_grid:
        p += old_grid
    if show_vectors:
        p += eigenvectors
    if show_new_grid:
        p += new_grid
    if show_projections:
        all_ics = {}
        for state, color in ics.items():
            R, S = T.inverse()*vector(state)
            all_ics[tuple(R*u1)] = color1.darker(0.6)
            all_ics[tuple(S*u2)] = color2.darker(0.6)
            if not only:
                all_ics[state] = color
    else:
        all_ics = ics
    for state, color in all_ics.items():
        states = [vector(state)]
        for i in range(iterations):
            states.append(M*states[-1])
        p += list_plot(states, plotjoined=False, color=color, size=50)
        if plotjoined:
            p += list_plot(states, plotjoined=True, color=color)
    show(M)
    show(p, xmin=-size, xmax=size, ymin=-size, ymax=size, figsize=10,
         aspect_ratio=1, axes_labels=("$X$", "$Y$"))

The above simulation(s) show the point of diagonalizing a matrix:

When you treat the system entirely in the $R, S$-coordinate system defined by the eigenvectors, the matrix becomes diagonal, which means that the behavior of the entire system can be understood just as for a diagonal matrix. The only difference is that instead of thinking in terms of $X, Y$-coordinates, with all the growth and decay happening along the $X$ and $Y$ axes, now everything in the state space must be handled in the $R, S$-coordinate system, and all the exponential growth/decay happens along the axes of that coordinate system... which are the eigenvector lines.

In particular, this means that everything we said above about the dominant eigenvalue, and about the types of equilibrium point at the origin, carries over to arbitrary matrices, not just diagonal ones.

Examples of diagonalizing

Black bear population model:

M = matrix(RDF, [
    [0.65, 0.5],
    [0.25, 0.9],
])
show("$M = $", M)
M.eigenvalues()
u1 = vector([1, 1])
u2 = vector([-2, 1])
T = column_matrix([u1, u2])
show("$T = $", T)
print("\n\n")
D = T.inverse()*M*T
show("$D = $", D.round(6))
M = \left(\begin{array}{rr} 0.65 & 0.5 \\ 0.25 & 0.9 \end{array}\right)

[0.4, 1.15]

T = \left(\begin{array}{rr} 1 & -2 \\ 1 & 1 \end{array}\right)

D = \left(\begin{array}{rr} 1.15 & 0.0 \\ -0.0 & 0.4 \end{array}\right)

Mares and fillies model from Midterm 2:

M = matrix(RDF, [
    [0.35, 0.25],
    [0.45, 0.95],
])
show("$M = $", M)
[round(eigenvalue, 5) for eigenvalue in M.eigenvalues()]
u1 = vector([1, 3])
u2 = vector([5, -3])
T = column_matrix([u1, u2])
show("$T = $", T)
print("\n\n")
D = T.inverse()*M*T
show("$D = $", D.round(6))
M = \left(\begin{array}{rr} 0.35 & 0.25 \\ 0.45 & 0.95 \end{array}\right)

[0.2, 1.1]

T = \left(\begin{array}{rr} 1 & 5 \\ 3 & -3 \end{array}\right)

D = \left(\begin{array}{rr} 1.1 & 0.0 \\ 0.0 & 0.2 \end{array}\right)

Lionfish model from lab:

This one is 3-dimensional, but diagonalization still works the same way.

M = matrix(RDF, [
    [0,       0,     35315],
    [0.00003, 0.777, 0],
    [0,       0.071, 0.949],
])
show("$M = $", M)
M.eigenvalues()
T = column_matrix([evec[0] for eval, evec, mult in M.eigenvectors_right()])
show("$T = $", T.round(8))
print("\n\n")
D = T.inverse()*M*T
show("$D = $", D.round(4))
M = \left(\begin{array}{rrr} 0.0 & 0.0 & 35315.0 \\ 3 \times 10^{-5} & 0.777 & 0.0 \\ 0.0 & 0.071 & 0.949 \end{array}\right)

[0.15026156162015658, 0.44126018150798385, 1.1344782568718599]

T = \left(\begin{array}{rrr} 1.0 & -1.0 & 1.0 \\ -4.787 \times 10^{-5} & 8.935 \times 10^{-5} & 8.392 \times 10^{-5} \\ 4.25 \times 10^{-6} & -1.249 \times 10^{-5} & 3.212 \times 10^{-5} \end{array}\right)

D = \left(\begin{array}{rrr} 0.1503 & -0.0 & -0.0 \\ 0.0 & 0.4413 & -0.0 \\ -0.0 & -0.0 & 1.1345 \end{array}\right)

What about complex eigenvalues?

Here is a 3-dimensional model that has a pair of complex eigenvalues. Recall that complex eigenvalues give rise to rotating/oscillating behavior, usually combined with exponential growth (spiralling outward) or exponential decay (spiralling inward), depending on whether the absolute value of those eigenvalues is greater than or less than 1.

M = matrix(RDF, [
    [ 0.46, 0.64,  0.08],
    [-0.08, 1.18, -0.64],
    [-0.72, 0.72,  0.54],
])
show("$M = $", M)
print("\n")
for eigenvalue in M.eigenvalues():
    show(eigenvalue)
M = \left(\begin{array}{rrr} 0.46 & 0.64 & 0.08 \\ -0.08 & 1.18 & -0.64 \\ -0.72 & 0.72 & 0.54 \end{array}\right)

0.54 + 0.72i

0.54 - 0.72i

1.1

One way of handling this: just use the complex eigenvectors

This diagonalizes the matrix perfectly, as usual. But what does it really tell us about where the trajectories will go and how they will behave?

T = column_matrix([evec[0]*sqrt(2).n() for eval, evec, mult in M.eigenvectors_right()])
show("$T = $", T.round(8))
print("\n\n")
D = T.inverse()*M*T
show("$D = $", D.round(4))
T = \left(\begin{array}{rrr} 0.5 - 0.5i & 0.5 + 0.5i & -1.0 \\ 0.5 + 0.5i & 0.5 - 0.5i & -1.0 \\ 1.0 & 1.0 & 0.0 \end{array}\right)

D = \left(\begin{array}{rrr} 0.54 + 0.72i & 0.0 & 0.0 \\ 0.0 & 0.54 - 0.72i & 0.0 \\ 0.0 & 0.0 & 1.1 \end{array}\right)

Perhaps a better way:

Take one of the complex eigenvectors (it doesn't matter which one, since they're complex conjugates of each other) and separate its real and imaginary parts into two separate vectors. Use these, along with the real eigenvector, to form the columns of the transformation matrix $T$. Now look what happens...
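The `transformation_matrix` helper used in the next cell is provided with the worksheet (it is not a Sage built-in), so here is a NumPy sketch of what that construction presumably amounts to; the index-picking logic below is my own, not from the worksheet:

```python
# Build T from the real and imaginary parts of a complex eigenvector
# (plus the real eigenvector), then check that T^-1 M T is block diagonal.
import numpy as np

M = np.array([[ 0.46, 0.64,  0.08],
              [-0.08, 1.18, -0.64],
              [-0.72, 0.72,  0.54]])

evals, evecs = np.linalg.eig(M)
i_complex = int(np.argmax(evals.imag))       # eigenvalue 0.54 + 0.72i
i_real = int(np.argmin(np.abs(evals.imag)))  # eigenvalue 1.1

v = evecs[:, i_complex]                      # one complex eigenvector
T = np.column_stack([v.real, v.imag, evecs[:, i_real].real])
D = np.linalg.inv(T) @ M @ T
print(np.round(D, 4))  # approximately [[0.54, 0.72, 0], [-0.72, 0.54, 0], [0, 0, 1.1]]
```

The columns of this T differ from the worksheet's T by scaling, but the resulting D is the same block-diagonal matrix, since rescaling a complex eigenvector does not change the invariant plane it spans.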

T = transformation_matrix(M)*sqrt(2).n()
show("$T = $", T.round(8))
print("\n\n")
D = T.inverse()*M*T
show("$D = $", D.round(4))
T = \left(\begin{array}{rrr} 0.5 & -0.5 & -1.0 \\ 0.5 & 0.5 & -1.0 \\ 1.0 & 0.0 & -0.0 \end{array}\right)

D = \left(\begin{array}{rrr} 0.54 & 0.72 & 0.0 \\ -0.72 & 0.54 & 0.0 \\ 0.0 & 0.0 & 1.1 \end{array}\right)

This matrix is not diagonal, but it is close: it is called a “block diagonal” matrix. The 2×2 “block” that appears in the upper left part of the matrix has the form
\begin{bmatrix} a & b \\ -b & a \end{bmatrix}
where $a$ and $b$ are exactly the real and imaginary parts of the complex eigenvalues of our original matrix! The eigenvalues of this 2×2 block, by itself, are also exactly $a + bi$ and $a - bi$, the same as those of our original matrix.

A 2×2 matrix of this form, $\begin{bmatrix} a & b \\ -b & a \end{bmatrix}$, as a discrete-time linear model, will have trajectories that rotate/oscillate around the origin as they exponentially grow (spiral outward) or exponentially decay (spiral inward). Once again, whether they grow or decay, and how fast, depends on the value of $\sqrt{a^2 + b^2}$... which is just the absolute value of the two complex eigenvalues. Note that in our example,

\sqrt{a^2 + b^2} = \sqrt{0.54^2 + 0.72^2} = 0.9 < 1

so this 2×2 block will cause spiralling inward.
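A quick numerical illustration of that spiralling (NumPy, outside the worksheet's Sage cells):

```python
# Iterate the block B = [[a, b], [-b, a]] with a = 0.54, b = 0.72.
# Each step rotates the state and scales its length by sqrt(a^2 + b^2) = 0.9.
import numpy as np

a, b = 0.54, 0.72
B = np.array([[a, b],
              [-b, a]])
state = np.array([1.0, 0.0])
for k in range(5):
    state = B @ state
    print(np.round(state, 4), round(np.linalg.norm(state), 4))
# the lengths shrink geometrically: 0.9, 0.81, 0.729, 0.6561, 0.5905
```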

What does all of the above mean for the behavior of our original 3×3 matrix $M$? The matrix $T$ given above still works as a transformation matrix that converts between coordinate systems, in the usual way: the columns of $T$ are a basis of $\mathbb{R}^3$, and this basis defines a new coordinate system, which we might call $Q, R, S$-coordinates. What the block-diagonal form of $D$ means is that in the $Q, R$-plane in this coordinate system, trajectories will spiral inward. Along the $S$ axis, trajectories will grow exponentially (at 10% per iteration). Everywhere else in the state space, the behavior will be a combination of these two features, just as we've seen in all the previous examples. Here is a 3-D rendering of this, showing the $Q, R$-plane (in green), the $S$-axis (in red), and a typical trajectory that starts close to the $Q, R$-plane. This is one version of a 3-dimensional saddle point, one that includes rotating/spiralling behavior.

ics = {(3, -3, 0.1): 46}
plot_eigenspaces3d(M, ics).show(frame=False)