
Lecture slides for UCLA LS 30B, Spring 2020

License: GPL3
Image: ubuntu2004
Kernel: SageMath 9.3
import numpy as np
from ipywidgets import interactive_output, VBox, HBox

Learning goals:

  • Be able to explain what an eigenvector and an eigenvalue of a matrix are.

  • Be able to explain how you can see one eigenvector-eigenvalue pair in the long-term behavior of a discrete-time linear model.

Recall the black bear population model:

Assumptions:

  • The population is split into two types of bears: juveniles ($J$) and adults ($A$).

  • Each year, on average, 42% of adults give birth to a cub.

  • Each year, 24% of juveniles reach adulthood.

  • Each year, 15% of adult bears die, and 29% of juvenile bears die.

Resulting model:

$$\begin{pmatrix} J_{t+1} \\ A_{t+1} \end{pmatrix} = \begin{pmatrix} 0.47 J_t + 0.42 A_t \\ 0.24 J_t + 0.85 A_t \end{pmatrix} = \begin{bmatrix} 0.47 & 0.42 \\ 0.24 & 0.85 \end{bmatrix} \begin{pmatrix} J_t \\ A_t \end{pmatrix}$$
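The entries of the matrix follow from the assumptions: a juvenile stays juvenile with probability $1 - 0.24 - 0.29 = 0.47$, and an adult survives with probability $1 - 0.15 = 0.85$. As a quick sketch of the model outside the Sage interactives below (plain numpy, using the same matrix and the same starting population of 500 juveniles and 250 adults):

```python
import numpy as np

# Projection matrix assembled from the assumptions:
#   row 1: juveniles who stay juvenile (1 - 0.24 - 0.29 = 0.47) plus new cubs (0.42 per adult)
#   row 2: juveniles who mature (0.24) plus surviving adults (1 - 0.15 = 0.85)
M = np.array([[0.47, 0.42],
              [0.24, 0.85]])

state = np.array([500.0, 250.0])  # (J_0, A_0)
for t in range(30):
    state = M @ state  # (J_{t+1}, A_{t+1}) = M (J_t, A_t)

print(state)  # population after 30 years
```

After a few decades the ratio of juveniles to adults settles down, which is exactly the phenomenon the interactives below are built to show.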
def matrix_interactive():
    M = matrix(RDF, [[0.47, 0.42], [0.24, 0.85]])
    show(r"M = " + latex(M))
    years = 30
    solution = [vector((500.0, 250.0))]
    for t in range(years + 1):
        solution.append(M * solution[-1])
    @interact(t=slider(0, years, 1, label="$t$"))
    def update(t):
        show(LatexExpr(r"t = %d" % (t,)))
        show(LatexExpr(r"\text{Current state: } \begin{bmatrix} %.1f \\ %.1f \end{bmatrix}" % tuple(solution[t])))
        show(LatexExpr(r"\text{Next state: } \begin{bmatrix} %.1f \\ %.1f \end{bmatrix}" % tuple(solution[t + 1])))

matrix_interactive()
$$M = \left(\begin{array}{rr} 0.47 & 0.42 \\ 0.24 & 0.85 \end{array}\right)$$
def blackbear_interactive_single():
    xmax, ymax = (850, 850)
    M = matrix(RDF, [[0.47, 0.42], [0.24, 0.85]])
    years = 30
    solution = [vector((500, 250))]
    for t in range(years):
        solution.append(M * solution[-1])
    solution = np.array(solution, dtype=float)
    labeltext = r"\begin{bmatrix} J_{%d} \\ A_{%d} \end{bmatrix} = \begin{bmatrix} %.1f \\ %.1f \end{bmatrix}"
    @interact(t=slider(0, years, 1, label="$t$"))
    def update(t):
        p = list_plot(solution[:t+1], color="blue")
        p.show(xmin=0, ymin=0, xmax=xmax, ymax=ymax, aspect_ratio=1,
               axes_labels=("$J$", "$A$"))
        show(LatexExpr(labeltext % (t, t, *solution[t])))

blackbear_interactive_single()
def blackbear_interactive_many():
    xmax, ymax = (850, 850)
    M = matrix(RDF, [[0.47, 0.42], [0.24, 0.85]])
    evector = sorted(M.eigenvectors_right())[-1][1][0]
    length = vector((xmax, ymax)).norm()
    years = 30
    solutions = {
        "blue":       [vector((500, 250))],
        "darkorange": [vector((500,  50))],
        "red":        [vector((150, 500))],
        "darkgreen":  [vector(( 50, 700))],
    }
    for solution in solutions.values():
        for t in range(years):
            solution.append(M * solution[-1])
    solutions = {color: np.array(solution, dtype=float)
                 for color, solution in solutions.items()}
    controls = {color: slider(0, years, 1, label=color) for color in solutions}
    controls["show_eigenline"] = checkbox(False, label="Show “trend”")
    @interact(**controls)
    def update(**controls):
        p = Graphics()
        for color, solution in solutions.items():
            t = controls[color]
            p += list_plot(solution[:t+1], color=color)
            p += list_plot(solution[:t+1], color=color, plotjoined=True)
        if controls["show_eigenline"]:
            p += line([evector * -length, evector * length], color="magenta",
                      thickness=3, alpha=0.6)
        p.show(xmin=0, ymin=0, xmax=xmax, ymax=ymax, aspect_ratio=1,
               axes_labels=("$J$", "$A$"))

blackbear_interactive_many()
def blackbear_interactive_vectors():
    xmax, ymax = (850, 850)
    M = matrix(RDF, [[0.47, 0.42], [0.24, 0.85]])
    years = 30
    solution = [vector((500, 250))]
    for t in range(years + 1):
        solution.append(M * solution[-1])
    labeltext = r"\begin{bmatrix} J_{%d} \\ A_{%d} \end{bmatrix} = \begin{bmatrix} %.1f \\ %.1f \end{bmatrix}"
    @interact(t=slider(0, years, 1, label="$t$"),
              show_vectors=checkbox(False, label="Show arrows"),
              just2=checkbox(False, label="Just current and next"))
    def update(t, show_vectors, just2):
        p = Graphics()
        if just2:
            p += plot(solution[t], color="blue")
            p += plot(solution[t+1], color="green")
        else:
            p += list_plot(solution[:t+1], color="blue")
            if show_vectors:
                p += plot(solution[t], color="blue")
        p.show(xmin=0, ymin=0, xmax=xmax, ymax=ymax, aspect_ratio=1,
               axes_labels=("$J$", "$A$"))
        show(LatexExpr(labeltext % (t, t, *solution[t])))

blackbear_interactive_vectors()

So what we just said is, eventually $[\text{the next state}] = (\text{some scalar}) \, [\text{the current state}]$

But also, remember, we have a function $f$ for which $[\text{the next state}] = f([\text{the current state}])$

So that means that, eventually, we must have $f([\text{the current state}]) = (\text{some scalar}) \, [\text{the current state}]$

So what we just said is, eventually $[\text{the next state}] = (\text{some scalar}) \, [\text{the current state}]$


But also, remember, we have a matrix $M$ for which $[\text{the next state}] = M \, [\text{the current state}]$


So that means that, eventually, we must have $M \, [\text{the current state}] = (\text{some scalar}) \, [\text{the current state}]$
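For the bear model, that "some scalar" can be computed. A sketch with `numpy.linalg.eig` (plain numpy rather than the Sage methods used in these slides); for this particular matrix the two eigenvalues come out as 1.03 and 0.29, and the scalar seen in the long run is the larger one:

```python
import numpy as np

M = np.array([[0.47, 0.42],
              [0.24, 0.85]])

# Eigenvalues (and eigenvectors) of M. The "(some scalar)" above is the
# dominant eigenvalue: along its eigenvector, multiplying by M just
# scales the state by that factor each year.
eigenvalues, eigenvectors = np.linalg.eig(M)
print(sorted(eigenvalues))  # approximately [0.29, 1.03]
```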

Definition: Given a linear function $f: \mathbb{R}^n \to \mathbb{R}^n$, an $n$-dimensional vector $\vec{v}$ is called an eigenvector for $f$ if $\vec{v} \neq \vec{0}$ and $f(\vec{v}) = \lambda \vec{v}$ for some scalar $\lambda$. In this case, the scalar $\lambda$ is called the eigenvalue of $f$ corresponding to eigenvector $\vec{v}$.










Definition: Given an $n \times n$ matrix $M$, an $n$-dimensional vector $\vec{v}$ is called an eigenvector for $M$ if $\vec{v} \neq \vec{0}$ and $M \vec{v} = \lambda \vec{v}$ for some scalar $\lambda$. In this case, the scalar $\lambda$ is called the eigenvalue of $M$ corresponding to eigenvector $\vec{v}$.
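The definition can be checked numerically for the bear matrix. A sketch in plain numpy (the eigenvector components are whatever `np.linalg.eig` returns, normalized to unit length; any nonzero scalar multiple of them is also an eigenvector):

```python
import numpy as np

M = np.array([[0.47, 0.42],
              [0.24, 0.85]])
eigenvalues, eigenvectors = np.linalg.eig(M)

# Verify M v = lambda v for each eigenvalue/eigenvector pair
# (the eigenvectors are the columns of `eigenvectors`).
for i in range(2):
    lam = eigenvalues[i]
    v = eigenvectors[:, i]
    assert np.allclose(M @ v, lam * v)
    print(lam, v)
```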

Notes:

  • Here $M$ is a matrix, $\vec{v}$ is a vector, and $\lambda$ is a scalar.

  • This definition means that eigenvalues and eigenvectors always go together: for any
    eigenvector of a matrix, there is a corresponding eigenvalue, and vice-versa.

  • That symbol $\lambda$ is the (lowercase) Greek letter “lambda”. The prefix “eigen” ends up being applied to many things in math and science, all ultimately coming from this definition. It is of German origin, and hence is pronounced as in “Einstein”.
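The learning goal about long-term behavior can also be seen directly in numbers: for a generic starting population, the year-to-year growth factor converges to the dominant eigenvalue, and the direction of the state vector converges to its eigenvector. A sketch in plain numpy, using the same matrix and starting population as the interactives:

```python
import numpy as np

M = np.array([[0.47, 0.42],
              [0.24, 0.85]])

state = np.array([500.0, 250.0])
for t in range(40):
    new = M @ state
    growth = new[1] / state[1]  # year-to-year growth factor of adults
    state = new

print(growth)                         # approaches the dominant eigenvalue, 1.03
print(state / np.linalg.norm(state))  # direction approaches that eigenvector
```

This is exactly the “trend” line drawn by `blackbear_interactive_many`: every trajectory (except ones starting exactly on the other eigenvector's line) ends up moving along the dominant eigenvector, growing by the dominant eigenvalue each year.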

blackbear_interactive_vectors()

Conclusions:

  1. Definition: Given an $n \times n$ matrix $M$, a vector $\vec{v}$ is called an eigenvector for $M$ with eigenvalue $\lambda$ if $\vec{v} \neq \vec{0}$ and $M \vec{v} = \lambda \vec{v}$.

  2. Eigenvalues and eigenvectors tell us something about the long-term behavior of a discrete-time linear model. But there's a lot more to this story, so we'll have to wait for more details...