Description: Class example of singular value decomposition
Example of Singular Value Decomposition
This is a similar example to the one we did in class. Unfortunately the example we did in class did not save (the notebooks should save automatically but something went wrong). I do not remember the exact values I used in class, but I will try to make my matrix close to the one from class.
Start with a 3×4 matrix of rank 2. The third row is simply the sum of the first and second:
A = [ 2  3 -1  5 ;
      3  7  4 -2 ;
      5 10  3  3 ]
Calculate the SVD and save the results for future use:
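In Julia this can be done with svd from the LinearAlgebra standard library. A minimal sketch (the names U, S, Vt follow the text; we take the full SVD so that V also contains a basis of the kernel, which we will need later), together with the reconstruction of A as the product UΣVt:

using LinearAlgebra

F = svd(A, full=true)         # full SVD: U is 3×3, Vt is 4×4
U, S, Vt = F.U, F.S, F.Vt     # S holds the singular values
Σ = [Diagonal(S) zeros(3)]    # rebuild the 3×4 matrix Σ
B = U * Σ * Vt                # should reproduce A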
They seem to be the same. Note, however, that they are not exactly the same, due to rounding errors in floating-point arithmetic. For example, while the first entry of the product UΣVt shows as 2.0, it is actually a tiny bit off:
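A quick check (the exact value of the error depends on the machine, but it is on the order of the machine epsilon):

B[1, 1]        # not exactly 2.0 when shown to full precision
B[1, 1] - 2    # a tiny nonzero number, on the order of 1e-16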
Because of that, we cannot compare the matrices using exact equality:
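Instead of ==, we should compare up to a small tolerance, for example with ≈ (isapprox):

A == B    # false: exact comparison is spoiled by rounding
A ≈ B     # true: the matrices agree up to numerical tolerance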
Any vector in the range of A can be written as a linear combination of R1 and R2. In fact, since they form an orthonormal basis, we can find the coefficients of the linear combination simply using a dot product:
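Here R1 and R2 are the first two columns of U (the definitions below are assumed to match the text):

R1 = U[:, 1]    # first column of U
R2 = U[:, 2]    # second column of U; together they span the range of A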
Y1 = [2; 5; 7]    # this should be in the range
c1 = Y1 ⋅ R1
c2 = Y1 ⋅ R2
c1*R1 + c2*R2
If we try this with a vector that is not in the range, it will not work:
Y2 = [2; 5; 8]    # this is not in the range
c1 = Y2 ⋅ R1
c2 = Y2 ⋅ R2
c1*R1 + c2*R2
Now consider the system AX = Y1. We already know that the vector on the right hand side is in the range of A (it is the vector Y1 above), so the system has a solution. Since the kernel of A has dimension 2, the system has infinitely many solutions; in fact, a two-parameter family of solutions.
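One way to obtain a particular solution X1 from the SVD is the pseudoinverse (the name X1 follows the text; pinv is a sketch of one possibility, the coefficients could also be computed by hand from U, S and Vt):

X1 = pinv(A) * Y1    # the minimal-norm solution of AX = Y1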
To check that this is a solution of the system, all we need to do is multiply it by A:
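Up to rounding, we get Y1 back:

A * X1        # equals Y1 up to rounding
A * X1 ≈ Y1   # true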
Out of all the solutions of the system, X1 is the one with the smallest norm, and also the unique one that lies in the row space of A. To see that it really is in the row space of A, we will write it as a linear combination of Rt1 and Rt2, which form an orthonormal basis of the row space. Again, since the basis is orthonormal, we can get the coefficients using dot products:
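Here Rt1 and Rt2 are the first two rows of Vt (the definitions are assumed to match the text):

Rt1 = Vt[1, :]    # first row of Vt, as a vector
Rt2 = Vt[2, :]    # second row of Vt; together they span the row space of A
d1 = X1 ⋅ Rt1
d2 = X1 ⋅ Rt2
d1*Rt1 + d2*Rt2 ≈ X1    # true: X1 lies in the row space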
We already know that the vector on the right hand side is not in the range of A (it is the vector Y2 above), so this system will not have a solution. Instead, we will find the "least squares solution", which in some sense is the best approximation to a solution.
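Again the pseudoinverse gives it directly (a sketch; the name X2 follows the text):

X2 = pinv(A) * Y2    # the minimal-norm least squares solution of AX = Y2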
We can see that this is not a solution of the system AX = Y2, since AX2 is not Y2 but only something close to Y2. In fact, comparing this with the result we obtained above when we first introduced Y2, we can see that AX2 is the orthogonal projection of Y2 onto the range of A; in other words, the closest vector in the range of A to Y2.
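Both claims can be checked numerically:

A * X2 ≈ Y2                               # false: X2 is not an actual solution
A * X2 ≈ (Y2 ⋅ R1)*R1 + (Y2 ⋅ R2)*R2      # true: AX2 is the projection of Y2 onto the range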
Again, this is the least squares solution with the smallest norm, and also the one that lies in the row space of A. We can find other least squares solutions of the system by adding linear combinations of K1 and K2.
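Here K1 and K2 are the last two rows of Vt, which form an orthonormal basis of the kernel of A (the definitions are assumed to match the text; the coefficients 2 and -3 are arbitrary):

K1 = Vt[3, :]    # third row of Vt
K2 = Vt[4, :]    # fourth row of Vt; together they span the kernel of A
A * (X2 + 2*K1 - 3*K2) ≈ A * X2    # true: still a least squares solution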
Other right hand sides:
If we want to find other right hand sides for which the system either has solutions or has no solutions, we can do the following:
If we want the system to have solutions, we need to take the right hand side from the range of A. The range is spanned by the first two columns of U, which we called R1 and R2. So, for example, with the right hand side below, the system has solutions:
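A sketch (the coefficients and the names Y3, X3 are illustrative, not from the original):

Y3 = 2*R1 - 3*R2    # any linear combination of R1 and R2 lies in the range
X3 = pinv(A) * Y3
A * X3 ≈ Y3         # true: the system AX = Y3 has solutions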
On the other hand, if we make the right hand side a linear combination of the columns of U in which the third column, which we called Kt1, has a non-zero coefficient, the resulting system will have no solutions:
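Again a sketch (Kt1 as in the text; the names Y4, X4 and the coefficients are illustrative):

Kt1 = U[:, 3]             # third column of U, orthogonal to the range of A
Y4 = 2*R1 - 3*R2 + Kt1    # right hand side with a non-zero Kt1 component
X4 = pinv(A) * Y4         # only a least squares solution
A * X4 ≈ Y4               # false: the system AX = Y4 has no solutions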