
Notebook accompanying the exercises of week 1 (Chapters 1 & 2)

Project: WISB251
Kernel: Python 3 (Ubuntu Linux)

Chapter 1 - Numerical Algorithms

```python
import numpy as np
import matplotlib.pyplot as plt
```

Approximating the derivative

We can approximate the derivative of a function $f$ at $x$ as $$f'(x) \approx \frac{f(x + h) - f(x)}{h},$$ where the error is of order $\mathcal{O}(h)$. For $f(x) = \sin(x)$, for example, we get

```python
x = 1.2
h = 1e-4
approximation = (np.sin(x + h) - np.sin(x))/h
derivative = np.cos(x)
error = abs(derivative - approximation)
print('approximation : ', approximation)
print('derivative : ', derivative)
print('error : ', error)
```
approximation :  0.362311151919
derivative :  0.362357754477
error :  4.66025575812e-05

We can investigate how the error behaves as $h$ gets smaller. To plot the error as a function of $h$, we can evaluate the error for multiple values of $h$ at the same time (note the subtle changes in the Python code used to compute the approximation).

```python
x = 1.2
h = np.logspace(-20, 0, 21)
approximation = (np.sin(x + h) - np.sin(x))/h
derivative = np.cos(x)
error = abs(derivative - approximation)
plt.loglog(h, error, h, h)
plt.xlabel('$h$')
plt.ylabel('absolute error')
```
Image in a Jupyter notebook
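The V-shape of the error curve reflects a trade-off: the truncation error of the forward difference shrinks like $\frac{h}{2}|f''(x)|$ as $h$ decreases, while the rounding error grows like $\epsilon/h$. A rough sketch of where the minimum lies (the constant factors are only indicative):

```python
import numpy as np

# Total error ~ truncation + rounding ~ h/2 + eps/h (constants of order one).
# Minimizing over h gives h_opt ~ sqrt(2*eps), which for float64 is ~2e-8,
# consistent with the location of the minimum in the plot above.
eps = np.finfo(float).eps      # machine epsilon for float64, ~2.2e-16
h_opt = np.sqrt(2*eps)
print('machine epsilon :', eps)
print('h_opt           :', h_opt)
```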

Do Exercise 1.4.1

A better approximation of the derivative is supposedly given by $$f'(x) \approx \frac{f(x + h) - f(x - h)}{2h}.$$
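As a quick sanity check (not a proof of the order), we can compare both formulas at a single step size:

```python
import numpy as np

x, h = 1.2, 1e-4
exact = np.cos(x)                                  # derivative of sin
forward = (np.sin(x + h) - np.sin(x))/h            # one-sided difference
central = (np.sin(x + h) - np.sin(x - h))/(2*h)    # central difference
print('forward error :', abs(exact - forward))
print('central error :', abs(exact - central))
```

For the same $h$, the central difference is several orders of magnitude more accurate.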

Do Exercise 1.4.2 and show that the error is indeed of order $\mathcal{O}(h^2)$.

Do Exercise 1.4.3

Evaluating functions

Numerical errors can cause problems even when performing seemingly simple tasks like evaluating a function. To study this, we consider evaluating a function $f$ at a point $x$ and at a nearby point $x + h$. The relative error between the points is given by $e_x = |h/x|$, and the relative error in the function value is given by $e_f = |f(x+h)-f(x)|/|f(x)|$. The question is: is the relative error in the function, $e_f$, bigger or smaller than the relative error in $x$? If it is bigger, then we call $f$ ill-conditioned (near $x$) because it will blow up small errors. For $f(x) = \sqrt{x}$ near $x = 1$, for example, we get

```python
x = 1
h = 1e-3
ex = abs(h/x)
ef = abs(np.sqrt(x+h) - np.sqrt(x))/abs(np.sqrt(x))
print('error in x : ', ex)
print('error in f : ', ef)
```
error in x :  0.001
error in f :  0.000499875062461

so this function is well-conditioned near $x = 1$ (why?). Looking at $f(x) = \tan(x)$ around $x = \pi/2$ we find

```python
x = np.pi/2
h = -1e-3
ex = abs(h/x)
ef = abs(np.tan(x+h) - np.tan(x))/abs(np.tan(x))
print('error in x : ', ex)
print('error in f : ', ef)
```
error in x :  0.0006366197723675814
error in f :  1.0

making this function ill-conditioned (near $\pi/2$). Can you explain why these two functions behave the way they do? (Hint: use the fact that $h$ is small.)
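The behaviour of both examples can be summarised by the relative condition number $\kappa(x) = |x\,f'(x)/f(x)|$, which for small $h$ approximates the ratio $e_f/e_x$. A small sketch (the helper `cond` is ours, not part of the course code):

```python
import numpy as np

# For small h: e_f/e_x ~ |x*f'(x)/f(x)|, the relative condition number.
def cond(f, df, x):
    return abs(x*df(x)/f(x))

# sqrt: kappa = 1/2 for every x > 0, so errors in x are damped
kappa_sqrt = cond(np.sqrt, lambda x: 0.5/np.sqrt(x), 1.0)
print('sqrt at x=1     :', kappa_sqrt)

# tan: kappa blows up near pi/2, where tan has a pole
kappa_tan = cond(np.tan, lambda x: 1/np.cos(x)**2, np.pi/2 - 1e-3)
print('tan near x=pi/2 :', kappa_tan)
```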

Do Exercise 1.4.4

Chapter 2 - Roundoff errors

Consider the following equivalent representations of the same polynomial

$$f(x) = (x-1)(x-2)\cdots(x-20),$$

and

$$f(x) = x^{20} - 210\cdot x^{19} + 20615\cdot x^{18} - 1256850\cdot x^{17} + 53327946\cdot x^{16} - 1672280820\cdot x^{15} + 40171771630\cdot x^{14} - 756111184500\cdot x^{13} + 11310276995381\cdot x^{12} - 135585182899530\cdot x^{11} + 1307535010540395\cdot x^{10} - 10142299865511450\cdot x^{9} + 63030812099294896\cdot x^8 - 311333643161390640\cdot x^7 + 1206647803780373360\cdot x^6 - 3599979517947607200\cdot x^5 + 8037811822645051776\cdot x^4 - 12870931245150988800\cdot x^3 + 13803759753640704000\cdot x^2 - 8752948036761600000\cdot x + 2432902008176640000,$$

and plot the function using both expressions on the interval $[0,21]$. Zoom in around the roots; can you explain the difference between the two graphs?

```python
x = np.linspace(0, 21, 1000)
f = lambda x: (x**20 - 210*x**19 + 20615*x**18 - 1256850*x**17
               + 53327946*x**16 - 1672280820*x**15 + 40171771630*x**14
               - 756111184500*x**13 + 11310276995381*x**12
               - 135585182899530*x**11 + 1307535010540395*x**10
               - 10142299865511450*x**9 + 63030812099294896*x**8
               - 311333643161390640*x**7 + 1206647803780373360*x**6
               - 3599979517947607200*x**5 + 8037811822645051776*x**4
               - 12870931245150988800*x**3 + 13803759753640704000*x**2
               - 8752948036761600000*x + 2432902008176640000)
g = lambda x: ((x - 1)*(x - 2)*(x - 3)*(x - 4)*(x - 5)*(x - 6)*(x - 7)
               *(x - 8)*(x - 9)*(x - 10)*(x - 11)*(x - 12)*(x - 13)
               *(x - 14)*(x - 15)*(x - 16)*(x - 17)*(x - 18)*(x - 19)*(x - 20))
plt.plot(x, f(x), x, g(x))
```
Image in a Jupyter notebook
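The jagged graph of the expanded form reflects how sensitive this polynomial is to tiny perturbations of its coefficients. As an illustration (using `np.poly` and `np.roots`, which do not appear in the notebook above), perturbing the $x^{19}$ coefficient of $-210$ by only $10^{-7}$ makes several of the roots complex:

```python
import numpy as np

# Coefficients of (x-1)(x-2)...(x-20) in descending powers of x
coeffs = np.poly(np.arange(1, 21))

perturbed = coeffs.copy()
perturbed[1] += 1e-7        # x^19 coefficient: -210 -> -210 + 1e-7

roots = np.roots(perturbed)
print('max |imaginary part| of perturbed roots:', max(abs(roots.imag)))
```

A relative change of roughly $5\cdot 10^{-10}$ in one coefficient visibly moves roots off the real axis.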

Cancellation errors

In the previous chapter we saw that when computing the derivative with finite differences, cancellation errors caused the approximation error to grow for small step sizes. In some cases these cancellation errors can be prevented by rewriting the expressions used in the computations.
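A classical example of such a rewrite (not taken from the notebook above) is $\sqrt{x+1} - \sqrt{x}$ for large $x$: multiplying by the conjugate removes the subtraction of two nearly equal numbers:

```python
import numpy as np

x = 1e12
# Naive form: the two square roots agree in ~12 digits, so most digits cancel
naive = np.sqrt(x + 1) - np.sqrt(x)
# Rewritten with the conjugate: no subtraction of nearly equal numbers
stable = 1.0/(np.sqrt(x + 1) + np.sqrt(x))
print('naive  :', naive)
print('stable :', stable)
```

Both expressions are mathematically identical, but only the rewritten one retains close to full precision.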

Do Exercise 2.5.{2,11,12(a),13}

Accumulation of roundoff errors and overflow

Another issue is the accumulation of roundoff errors when performing many arithmetic operations or when computing with large numbers. As an example, let's consider computing $$x = 1 + nb, \qquad y = x - nb,$$ for some given $b$ and large $n$. Of course, we expect $y = 1$ as the final result. We can compute this in three different ways and compare. We'll first use $b = 1/3$ and $n = 10$.

```python
b = 1.0/3
n = int(1e1)

# Method 1
x = 1 + n*b
y = x - n*b
print(y)

# Method 2
x = 1
for _ in range(n):
    x = x + b
y = x
for _ in range(n):
    y = y - b
print(y)

# Method 3
x = 1
for _ in range(n):
    x = x + b
    x = x - b
print(x)
```
1.0
0.9999999999999991
1.0

Try it again using $n$ and $b$ large (for example $n = 10^7$ and $b = \tfrac{1}{3}\cdot 10^{10}$); what do you notice? Can you explain this?
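Two effects of floating-point arithmetic that play a role here, shown with plain Python floats (the numbers are chosen purely for illustration):

```python
import math

# Absorption: near 1e17 the spacing between adjacent float64 values is ~16,
# so adding 1 is rounded away entirely
big = 1e17
print(big + 1 == big)          # True

# Overflow: float64 cannot represent values beyond ~1.8e308
print(1e308 * 10)              # inf
print(math.isinf(1e308 * 10))  # True
```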

Do Exercise 2.5.{12(b), 15, 19}