
This repository contains the course materials from Math 157: Intro to Mathematical Software.

Creative Commons BY-SA 4.0 license.

Kernel: Julia

Math 157: Intro to Mathematical Software

UC San Diego, winter 2018

Homework 8: due March 10, 2018

Note: Due to the cancellation of sections and rescheduling of office hours this week, I am preemptively moving the due date of this assignment to Saturday, March 10 at 8pm.

This problem set consists of 6 problems, each of equal value. As usual, please enter all answers within this notebook except as specified, and cite all sources and collaborators.

Throughout this problem set, use the Julia kernel.

Problem 1: Parallel computing

Grading criteria: correctness and thoroughness of explanations. No code required.

1a. Explain what the "GIL" is in Python and its effect on parallel computation.

The GIL (Global Interpreter Lock) is a mutex that allows only one thread to execute Python bytecode at any point in time. It is necessary because Python's memory management is not thread-safe. Thus, even on a machine with multiple processors, the GIL prevents threads from running Python code in parallel, which can create performance bottlenecks.

1b. Explain what a "race condition" is and how the absence of a GIL pertains to this.

A race condition occurs when two or more threads access shared data and attempt to change it simultaneously, so the final contents of the data may differ from what is expected, depending on the order in which the threads happen to run. The absence of a GIL would allow multiple threads to execute at the same time; since Python's memory management is not thread-safe, this could lead to race conditions inside the interpreter itself.
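As an illustration (a hypothetical Python sketch, not part of the assignment): even a simple read-modify-write update on shared state can race when the interleaving is unlucky. The sketch below forces a bad interleaving with a short sleep between the read and the write, then shows that a `threading.Lock` restores correctness.

```python
import threading
import time

counter = 0

def unsafe_increment():
    # read-modify-write with a deliberate pause: other threads can read
    # the stale value before we write ours back, losing updates
    global counter
    tmp = counter
    time.sleep(0.01)
    counter = tmp + 1

threads = [threading.Thread(target=unsafe_increment) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print("unsafe:", counter)  # typically less than 10: updates were lost

counter = 0
lock = threading.Lock()

def safe_increment():
    # the lock serializes the read-modify-write, eliminating the race
    global counter
    with lock:
        tmp = counter
        time.sleep(0.01)
        counter = tmp + 1

threads = [threading.Thread(target=safe_increment) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print("safe:", counter)  # always 10
```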

Problem 2: Euclidean algorithm

Grading criteria: correctness of code.

2a. Implement the Euclidean algorithm from scratch (not using any built-in functions other than arithmetic) for multiprecision integers.

# extended Euclidean algorithm: output is (gcd(a,b), x, y) with gcd(a,b) = a*x + b*y
function eea(a, b)
    s0, s1 = 1, 0
    t0, t1 = 0, 1
    while b != 0
        q = div(a, b)
        a, b = b, rem(a, b)
        s0, s1 = s1, s0 - q*s1
        t0, t1 = t1, t0 - q*t1
    end
    if a < 0
        return -a, -s0, -t0
    else
        return a, s0, t0
    end
end
eea (generic function with 1 method)
# test cases
eea(6,6), eea(3,4), eea(15,-12), eea(6,6)[1]
((6, 0, 1), (1, -1, 1), (3, 1, 1), 6)
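For comparison, here is a hypothetical Python version of the same recurrence (the name `egcd` is my own); the returned coefficients satisfy the Bézout identity g = a*x + b*y:

```python
def egcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with g = gcd(a, b) = a*x + b*y."""
    s0, s1 = 1, 0
    t0, t1 = 0, 1
    while b != 0:
        q = a // b
        a, b = b, a - q * b
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    # normalize so the reported gcd is nonnegative
    if a < 0:
        return -a, -s0, -t0
    return a, s0, t0

g, x, y = egcd(240, 46)
print(g, x, y)                   # 2 -9 47
assert g == 240 * x + 46 * y     # Bézout identity holds
```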

2b. Implement the Chinese remainder theorem (for two moduli) using your answer to 2a. You should implement a function mimicking Sage's crt function, except that if the moduli are not coprime you may simply return an error. (By contrast, Sage handles this case more robustly: it returns a meaningful answer when possible and an error otherwise.)

# returns x (mod a_1*a_2) with x ≡ x_1 (mod a_1) and x ≡ x_2 (mod a_2), where a_1 and a_2 are coprime
function crt2(x_1, a_1, x_2, a_2)
    g = eea(a_1, a_2)
    g[1] != 1 && throw(error("gcd($a_1,$a_2) = $g"))
    return rem(a_1*g[2]*x_2 + a_2*g[3]*x_1, a_1*a_2)
end
crt2 (generic function with 1 method)
crt2(6,7,4,8)
20
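The same construction in Python (a hypothetical sketch; the names `egcd` and `crt2` are my own, and the formula mirrors the Julia cell above):

```python
def egcd(a, b):
    # extended Euclid: returns (g, x, y) with g = gcd(a, b) = a*x + b*y
    s0, s1, t0, t1 = 1, 0, 0, 1
    while b != 0:
        q = a // b
        a, b = b, a - q * b
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return a, s0, t0

def crt2(x1, a1, x2, a2):
    """Return x (mod a1*a2) with x ≡ x1 (mod a1) and x ≡ x2 (mod a2), coprime moduli."""
    g, p, q = egcd(a1, a2)          # p*a1 + q*a2 = 1 when gcd = 1
    if g != 1:
        raise ValueError(f"moduli not coprime: gcd({a1},{a2}) = {g}")
    return (a1 * p * x2 + a2 * q * x1) % (a1 * a2)

print(crt2(6, 7, 4, 8))  # 20: indeed 20 % 7 == 6 and 20 % 8 == 4
```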

Problem 3: MATLAB vs. Julia

Grading criterion: correctness of code.

This exercise refers to the Math 18 MATLAB exercise set.

3a. Do MATLAB exercise 4.5 once more, this time using Julia.

P = [0.8100 0.0800 0.1600 0.1000;
     0.0900 0.8400 0.0500 0.0800;
     0.0600 0.0400 0.7400 0.0400;
     0.0400 0.0400 0.0500 0.7800]
x0 = [0.5106, 0.4720, 0.0075, 0.0099]
4-element Array{Float64,1}:
 0.5106
 0.472
 0.0075
 0.0099
# Part a
D = zeros(4,4)
D[1:4,1:4] = Diagonal(eig(P)[1])
Q = eig(P)[2]
# check that P equals Q*D*inv(Q)
4×4 Array{Float64,2}:
 0.665624   0.767579    0.543214   -0.464102
 0.616521  -0.284124   -0.814822   -0.12538
 0.29462   -0.568248    0.181071   -0.25076
 0.300076   0.0847934   0.0905357   0.840243
# Part b
L = [1 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0]
4×4 Array{Int64,2}:
 1 0 0 0
 0 0 0 0
 0 0 0 0
 0 0 0 0
# Part c
Pinf = Q*L*inv(Q)
4×4 Array{Float64,2}:
 0.354651 0.354651 0.354651 0.354651
 0.328488 0.328488 0.328488 0.328488
 0.156977 0.156977 0.156977 0.156977
 0.159884 0.159884 0.159884 0.159884
# Part d
Pinf*x0
4-element Array{Float64,1}:
 0.354651
 0.328488
 0.156977
 0.159884
# Part e
y = [0.1; 0.4; 0.3; 0.2]
Pinf*y
# Observe that the columns of Pinf are identical. For any y = (a, b, c, d) with a + b + c + d = 1,
# Pinf*y equals that common column.
4-element Array{Float64,1}:
 0.354651
 0.328488
 0.156977
 0.159884
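A sanity check on the limiting behavior (a hypothetical pure-Python sketch, not the assigned Julia approach): since P is column-stochastic, repeatedly applying P to x0 should converge to the same steady-state vector that Pinf*x0 produced above.

```python
# Markov matrix and initial state from the exercise
P = [[0.81, 0.08, 0.16, 0.10],
     [0.09, 0.84, 0.05, 0.08],
     [0.06, 0.04, 0.74, 0.04],
     [0.04, 0.04, 0.05, 0.78]]
x = [0.5106, 0.4720, 0.0075, 0.0099]

def matvec(A, v):
    # plain matrix-vector product
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# power iteration: x_{k+1} = P x_k converges to the steady state
for _ in range(500):
    x = matvec(P, x)

print([round(c, 6) for c in x])  # ≈ [0.354651, 0.328488, 0.156977, 0.159884]
```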

3b. Repeat with MATLAB exercise 5.6, using the Moore-Penrose pseudoinverse to perform the least squares computation.

# Part a
B = [1 75; 1 100; 1 128; 1 159; 1 195]
d = [15; 23; 26; 34; 38]
Q, R = qr(B)
x = Q[:,1]
y = Q[:,2]
v = dot(x,d)*x + dot(y,d)*y
5-element Array{Float64,1}:
 16.5379
 21.264
 26.5572
 32.4176
 39.2232
# Part b
c = B\v
B*c - v  # close to 0
5-element Array{Float64,1}:
 7.10543e-15
 3.55271e-15
 0.0
 7.10543e-15
 7.10543e-15
# Part c
pinv(B)*d  # matches the MATLAB (or previous assignment) solution
2-element Array{Float64,1}:
 2.35959
 0.189044
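As a cross-check (a hypothetical pure-Python sketch): for a one-variable linear fit, the pseudoinverse solution coincides with the normal equations BᵀB c = Bᵀd, which can be solved in closed form.

```python
# data from the exercise: fit d ≈ c0 + c1 * t
t = [75, 100, 128, 159, 195]
d = [15, 23, 26, 34, 38]
n = len(t)

# sums entering the 2x2 normal equations
St, Sd = sum(t), sum(d)
Stt = sum(ti * ti for ti in t)
Std = sum(ti * di for ti, di in zip(t, d))

c1 = (n * Std - St * Sd) / (n * Stt - St * St)   # slope
c0 = (Sd - c1 * St) / n                          # intercept

print(round(c0, 5), round(c1, 6))  # ≈ 2.35959, 0.189044
```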

Problem 4: Hilbert matrices

Grading criterion: correctness of code and explanations.

4a. Define a Julia function f, which on the input of a positive integer n, returns the n × n Hilbert matrix with floating-point entries. Your function should work directly from the definition; however, you may use Nemo's built-in function to check your answer.

function f(n)
    return [1 / (i + j - 1) for i in 1:n, j in 1:n]
end
f (generic function with 1 method)

4b. Compute the inverse of f(25).

inv(f(25))
25×25 Array{Float64,2} with entries of magnitude up to about 10^17 (full output omitted)

4c. Is Julia's default algorithm for matrix inverse numerically stable? Justify your claim in terms of your answer to 4b.

# Not numerically stable: the product below should be the identity matrix, but it is far from it.
inv(f(25))*f(25)
25×25 Array{Float64,2}: far from the identity matrix; diagonal entries near 1, but off-diagonal entries as large as ~3 in absolute value (full output omitted)
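To see that the trouble comes from floating point rather than from the elimination algorithm itself, the inversion can be redone in exact rational arithmetic (a hypothetical pure-Python sketch using fractions.Fraction and Gauss-Jordan elimination; the Hilbert matrix is known to have an exact integer inverse):

```python
from fractions import Fraction

def hilbert(n):
    # exact Hilbert matrix: entry (i, j) is 1/(i + j - 1)
    return [[Fraction(1, i + j - 1) for j in range(1, n + 1)] for i in range(1, n + 1)]

def invert(A):
    """Gauss-Jordan elimination over the rationals: exact, no rounding error."""
    n = len(A)
    # augment A with the identity matrix
    M = [row[:] + [Fraction(int(i == j)) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        # find a nonzero pivot, swap it into place, normalize its row
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [entry / p for entry in M[col]]
        # eliminate the column from every other row
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

inv3 = invert(hilbert(3))
print([[int(entry) for entry in row] for row in inv3])
# [[9, -36, 30], [-36, 192, -180], [30, -180, 180]]: exactly the known integer inverse
```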

4d. In light of the Julia mission statement:

Julia is a high-level, high-performance dynamic programming language for numerical computing.

explain why your answer to 4c is to be expected (that is, why the Julia designers chose as they did in this case).

We are using floating-point numbers, whose precision is limited by the size of the float. Rounding errors are therefore inevitable when storing such numbers, and they accumulate over the course of the computation; the Hilbert matrix is moreover notoriously ill-conditioned, which amplifies these errors. This example suggests that the Julia designers placed more emphasis on computational speed than on exactness (inverting large matrices exactly is a costly process), consistent with the mission statement's emphasis on high performance.
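A minimal illustration of the underlying issue (hypothetical, pure Python): even 0.1 has no exact binary floating-point representation, so tiny rounding errors appear immediately and compound over many operations.

```python
# 0.1 is not exactly representable in binary, so ten additions miss 1.0
total = sum([0.1] * 10)
print(total)             # 0.9999999999999999
print(total == 1.0)      # False

# the error per operation is tiny, but long computations accumulate many of them
print(abs(total - 1.0))  # ≈ 1.1e-16
```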

Problem 5: Hadamard matrices

Grading criterion: correctness of code and explanations.

For this exercise, you will use the Nemo library.

using Nemo
Welcome to Nemo version 0.7.3
Nemo comes with absolutely no warranty whatsoever

5a. Using Nemo's built-in function, construct a 28 × 28 Hadamard matrix. Call this matrix H.

M = MatrixSpace(ZZ, 28, 28)
H = hadamard(M)
28×28 matrix over the integers with all entries ±1 (the full Hadamard matrix output is omitted)
A = [[Int(H[i,j]) for i in 1:28] for j in 1:28]
28-element Array{Array{Int64,1},1}: the entries of H as nested Julia integer vectors (full output omitted)

5b. Compute the determinant of H and check that it achieves equality in the Hadamard bound.

detH = det(H)
bound = BigInt(28)^14
detH == bound
true

5c. Compute the dot products between all pairs of rows in H.

dotH = H*transpose(H)
# The (i,j) entry of this matrix is the dot product of the i-th and j-th rows of H;
# the result should equal 28*I_28.
28×28 integer matrix equal to 28 times the identity: 28 on the diagonal, 0 elsewhere (full output omitted)

5d. Explain the relationship between your answers to 5b and 5c.

# By 5c, H*transpose(H) = 28*I_28, so taking determinants gives det(H)^2 = det(28*I_28) = 28^28,
# i.e. |det(H)| = 28^14, which is exactly the Hadamard bound verified in 5b.
det(dotH) == detH^2
true
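The same point in miniature (a hypothetical pure-Python sketch with the 2 × 2 Hadamard matrix): since H Hᵀ = n·I, taking determinants gives det(H)² = nⁿ, i.e. |det(H)| = n^(n/2), which is equality in the Hadamard bound.

```python
n = 2
H = [[1, 1], [1, -1]]  # the 2x2 Hadamard matrix

# H * H^T: distinct rows are orthogonal, and each row has squared norm n
HHt = [[sum(H[i][k] * H[j][k] for k in range(n)) for j in range(n)] for i in range(n)]
print(HHt)  # [[2, 0], [0, 2]], i.e. n * I

# det(H) = -2, so det(H)^2 = 4 = n^n and |det(H)| = 2 = n^(n/2): the Hadamard bound
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
print(abs(det) == n ** (n // 2))  # True
```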

Problem 6: Singular value decomposition

Grading criterion: correctness and pertinence of code. (Correctness means it runs, pertinence means you followed the instructions.)

Consider the following example from the numpy documentation.

>>> a = np.random.randn(9, 6) + 1j*np.random.randn(9, 6)
>>> b = np.random.randn(2, 7, 8, 3) + 1j*np.random.randn(2, 7, 8, 3)
>>> u, s, vh = np.linalg.svd(a, full_matrices=True)
>>> u.shape, s.shape, vh.shape
((9, 9), (6,), (6, 6))
>>> np.allclose(a, np.dot(u[:, :6] * s, vh))
True
>>> smat = np.zeros((9, 6), dtype=complex)
>>> smat[:6, :6] = np.diag(s)
>>> np.allclose(a, np.dot(u, np.dot(smat, vh)))
True

Write Julia code to emulate this example as faithfully as possible.

a = randn(9,6) + 1im*randn(9,6)
b = randn(2,7,8,3) + 1im*randn(2,7,8,3)
u, s, vh = svd(a, thin=false)
size(u), size(s), size(vh)
((9, 9), (6,), (6, 6))
# Julia equivalent of np.allclose
isapprox(a, (u[:,1:6] .* s') * vh')
true
smat = zeros(Complex64, 9, 6)
smat[1:6,1:6] = diagm(s)
isapprox(a, u*(smat*vh'), atol=0.1)
true