CoCalc Public Files: 2015-09-15-214225.ipynb
Author: AAshi
Compute Environment: Ubuntu 18.04 (Deprecated)

Arwa Ashi

MATH 640 Homework 1:

In [1]:
# Standard imports for widgets and plotting
import numpy as np
import matplotlib.pyplot as plt
from math import *
from IPython.html import widgets
from IPython.html.widgets import interact
from IPython.display import display
%matplotlib inline  # render plots inline in the notebook
:0: FutureWarning: IPython widgets are experimental and may change in the future.
In [2]:
def plt_xvsy(x, y, title="", color="red", linestyle="dashed", linewidth=2):
    fig = plt.figure(figsize=(8, 6))
    axes = fig.add_subplot(111)
    axes.plot(x, y, color=color, linestyle=linestyle, linewidth=linewidth)
    axes.set_title(title)
    axes.grid()
    axes.set_yscale('log')
    plt.show()
  1. The Euclidean norm of an n−dimensional vector x is defined by:

|| {\bf x} ||_2 := \left(\sum_{i=0}^{n-1} x_i^2 \right)^{\frac{1}{2}} = \sqrt{x^T\cdot x}

Implement a robust routine for computing this quantity for any given input vector x. Your routine should avoid overflow and harmful underflow.

  • A straightforward implementation can potentially produce errors (overflow or underflow) in the computation.
  • A more robust implementation can be coded.
  • Find vectors that behave differently in the two implementations.
  • Discuss why you coded the robust implementation the way that you did.
In [3]:
n = 9.0
x = np.arange(n)
xalt = np.arange(n)

# straightforward implementation
def straightForwardNorm(x):
    # takes the entries as they are ordered; nothing special here
    norm = 0.0
    for k in range(len(x)):
        norm += x[k]**2.0
    return norm**0.5

print ("x =",x)
print ("Straight Forward Norm =",straightForwardNorm(x))

# "robust" implementation
def RobustNorm(x):
    # takes the entries of the given x in a more intelligent manner;
    # consider changing how x is ordered
    norm = 0.0
    for k in range(len(x)):
        xalt = np.arange(n) + 0.0001
        norm += xalt[k]**2.0
    return norm**0.5

print ("xalt=",xalt)
print ("RobustNorm=",RobustNorm(x))

from numpy import linalg as LA
print ("LA.norm function = ",LA.norm(x))
('x =', array([ 0., 1., 2., 3., 4., 5., 6., 7., 8.])) ('Straight Forward Norm =', 14.282856857085701) ('xalt=', array([ 0., 1., 2., 3., 4., 5., 6., 7., 8.])) ('RobustNorm=', 14.283108908427463) ('LA.norm function = ', 14.282856857085701)
In [4]:
(LA.norm(x), straightForwardNorm(x), RobustNorm(x) )
(14.282856857085701, 14.282856857085701, 14.283108908427463)
  • Compare both the accuracy and performance of your robust routine with a more straightforward, naive implementation.

The robust routine gives nearly the same result as the straightforward naive implementation of the Euclidean norm.

Its accuracy is close to the true solution; the small difference comes from the 0.0001 added to each entry inside RobustNorm.

  • How much performance does the robust routine sacrifice?

The robust routine sacrifices little: the problem itself is well-conditioned, since a small relative change in the input data causes a reasonably commensurate relative change in the solution.

  • Straight Forward Norm = 14.282856857085701
  • RobustNorm = 14.283108908427463
In [5]:
error = abs(straightForwardNorm(x) - RobustNorm(x))
print (error)
0.000252051341763
  • Can you devise a vector that produces significantly different results from the two routines?

A vector whose entries are very large (or very small) in magnitude will do it: squaring such entries overflows (or underflows) in the straightforward routine, so the two implementations can differ dramatically.
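A concrete sketch of such a vector, together with one standard robust strategy: scale by the largest magnitude before squaring, so every squared entry lies in [0, 1]. The names naive_norm and scaled_norm are ours, and the scaling trick is an assumption about what "robust" means here, not the RobustNorm routine above:

```python
import timeit
import numpy as np

def naive_norm(x):
    # squares each entry directly: entries near 1e200 overflow to inf,
    # entries near 1e-200 underflow harmfully to 0
    return np.sqrt(np.sum(x ** 2))

def scaled_norm(x):
    # factor out the largest magnitude so each squared entry lies in [0, 1]
    s = np.max(np.abs(x))
    if s == 0.0:
        return 0.0
    return s * np.sqrt(np.sum((x / s) ** 2))

big = np.array([1e200, 1e200])
with np.errstate(over='ignore'):
    print(naive_norm(big))   # inf: (1e200)**2 overflows a double
print(scaled_norm(big))      # about 1.414e+200, the correct value

# performance: the scaled version pays for roughly one extra pass over the data
v = np.random.rand(10000)
print(timeit.timeit(lambda: naive_norm(v), number=1000))
print(timeit.timeit(lambda: scaled_norm(v), number=1000))
```

The extra max/divide pass is the whole performance cost of robustness, which is why library routines such as LA.norm can afford to guard against overflow.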

In [ ]:
  1. The polynomial f(x) = (x−1)^{6} has the value zero at x = 1 and is positive elsewhere. The expanded form of the polynomial:

f(x) = x^{6} − 6x^{5} + 15x^{4} − 20x^{3} + 15x^{2} − 6x + 1

is mathematically equivalent but may not give the same results numerically.

In [6]:
# Define the functions of interest
def fold(x):
    return (x-1)**6

def f(x):
    # expanded form; note the second term is -6x^5
    return x**6 - 6.00*x**5 + 15.00*x**4 - 20.00*x**3 + 15.00*x**2 - 6.00*x + 1.00
  • Compute and plot the values of this polynomial, using each of the two forms, for n equally spaced values on the interval [1−a,1+a], where a ∈ (0,0.01).
  • Document some cases including the case where a = 0.005 and n = 101.
  • Be sure to scale the plot so that the values of x and for the polynomial use the full ranges of their respective axes.
  • Can you explain the behavior? f(x) = (x−1)^{6} has the value zero at x = 1 and is positive elsewhere. The expanded form is mathematically identical, but evaluating it near x = 1 subtracts terms of size ~1 that nearly cancel, so the computed values are dominated by rounding error: they oscillate around zero (even going negative) instead of tracing the smooth curve of (x−1)^{6}.
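The cancellation can be checked numerically without plotting. A sketch for the documented case a = 0.005, n = 101 (factored and expanded are our names for the two forms):

```python
import numpy as np

def factored(x):
    # evaluates (x-1)**6 directly: always >= 0 and accurate near x = 1
    return (x - 1.0) ** 6

def expanded(x):
    # mathematically identical, but near x = 1 the terms (each of size ~1)
    # nearly cancel, leaving only rounding noise
    return x**6 - 6*x**5 + 15*x**4 - 20*x**3 + 15*x**2 - 6*x + 1

a, n = 0.005, 101
xs = np.linspace(1 - a, 1 + a, n)  # the interval [1-a, 1+a] from the exercise

print(factored(xs).max())                          # about (0.005)**6 = 1.5625e-14
print(np.abs(expanded(xs) - factored(xs)).max())   # noise of comparable size
print((expanded(xs) < 0).any())                    # the expanded form even dips below zero
```

The true values never exceed a^6 ≈ 1.6e-14, while the rounding noise in the expanded form is of the same order, which is why its plot looks like static rather than a smooth sextic.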
In [16]:
n = 101
def StudyH(h, a=0.005, n=101, **kwargs):
    # sample n equally spaced points on [1-a, 1+a]
    xpoints = np.linspace(1.0 - a, 1.0 + a, n)
    hpoints = np.zeros(n)
    hpoints2 = np.zeros(n)
    for i in range(n):
        hpoints[i] = fold(xpoints[i])
        hpoints2[i] = f(xpoints[i])
    plt_xvsy(xpoints, hpoints, title="fold(x)", **kwargs)
    plt_xvsy(xpoints, hpoints2, title="f(x)", **kwargs)

i = interact(StudyH,
             h = (0.000001, 0.5),
             a = (0.0, 0.01),
             n = (1, 250),
             color = ["red", "blue", "green", "orange"],
             linestyle = ["solid", "dashed"],
             linewidth = (1, 5))
In [ ]:
  1. Write a program to generate the first n terms in the sequence given by the difference equation:

x_{k+1} = 111 - \frac{1130 - \frac{3000}{x_{k-1}}}{x_k},

with starting values x_1 = \frac{11}{2} and x_2 = \frac{61}{11}. Explore the value of n. The exact solution is a monotonically increasing sequence converging to 6.

In [9]:
n = raw_input("Maximal Number? ")
n = int(n)
vectorOfxs = np.zeros(n)
vectorOfxs[0] = 11.0/2.0
vectorOfxs[1] = 61.0/11.0
for k in range(2, n):
    vectorOfxs[k] = 111.0 - (1130.0 - (3000.0/vectorOfxs[k-2]))/vectorOfxs[k-1]
print vectorOfxs
Maximal Number? 30 [ 5.5 5.54545455 5.59016393 5.63343109 5.67464862 5.71332905 5.74912092 5.78181095 5.81131467 5.83766396 5.86107848 5.88354293 5.93595672 6.53442164 15.41304318 67.47239836 97.13715118 99.82469415 99.98953969 99.99937614 99.99996276 99.99999778 99.99999987 99.99999999 100. 100. 100. 100. 100. 100. ]
  • Document a plot of the sequence values for n > 20. Can you explain the results? The exact solution is a monotonically increasing sequence converging to 6, and the computed sequence follows it for k ≤ 14. For k > 14 the computed sequence converges to 100 instead: roundoff error in the calculation is rapidly amplified and severely contaminates the solution. This is an indication of numerical instability.
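With exact rational arithmetic there is no roundoff to amplify, so the same recurrence really does increase monotonically toward 6; the exact terms satisfy the closed form x_k = (6^k + 5^k)/(6^{k−1} + 5^{k−1}). A sketch using Python's fractions module (muller_sequence is our name for the helper):

```python
from fractions import Fraction

def muller_sequence(n):
    # first n terms of the recurrence in exact rational arithmetic:
    # x_{k+1} = 111 - (1130 - 3000/x_{k-1}) / x_k
    xs = [Fraction(11, 2), Fraction(61, 11)]
    for k in range(2, n):
        xs.append(111 - (1130 - 3000 / xs[k - 2]) / xs[k - 1])
    return xs

xs = muller_sequence(30)
print(float(xs[-1]))  # about 5.995: still increasing toward 6, never near 100
```

In floating point, rounding introduces a tiny component of the recurrence's spurious third root, 100, which grows geometrically and eventually swamps the true solution; with Fraction that component is never seeded.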
In [14]:
def StudyH(h, a=0.5, n=30, **kwargs):
    xpoints = np.zeros(n)
    hpoints = np.zeros(n)
    for i in range(n):
        hpoints[i] = vectorOfxs[i]
        xpoints[i] = i
    plt_xvsy(xpoints, hpoints, title="Refinement $i$ Vs. Error", **kwargs)
    print hpoints

i = interact(StudyH,
             h = (0.000001, 0.5),
             a = (0.0, 1.0),
             n = (1, 100),
             color = ["red", "blue", "green", "orange"],
             linestyle = ["solid", "dashed"],
             linewidth = (1, 5))
[ 5.5 5.54555455 5.59206473 5.66750175 6.27563478 15.28597846 68.34910194 97.33863967 99.84196766 99.99080429 99.99946327 99.99996855 99.99999815 99.99999989 99.99999999 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. ]

Let's change the value of x_2 from \frac{61}{11} to \frac{61}{11} + 0.0001.

In [11]:
n = raw_input("Maximal Number? ")
n = int(n)
vectorOfxs = np.zeros(n)
vectorOfxs[0] = 11.0/2.0
vectorOfxs[1] = (61.0/11.0) + 0.0001
for k in range(2, n):
    vectorOfxs[k] = 111.0 - (1130.0 - (3000.0/vectorOfxs[k-2]))/vectorOfxs[k-1]
print vectorOfxs
Maximal Number? 30 [ 5.5 5.54555455 5.59206473 5.66750175 6.27563478 15.28597846 68.34910194 97.33863967 99.84196766 99.99080429 99.99946327 99.99996855 99.99999815 99.99999989 99.99999999 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. ]

The solution again converges to 100. This implies that the solution is highly sensitive to x_1 and x_2: the mathematical problem is ill-conditioned, and the floating-point algorithm is unstable.
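Raising the working precision only postpones the blow-up, it does not cure it: the rounding seed shrinks, but it still grows by roughly a factor of 100/6 per step. A sketch using Python's decimal module, with 50 digits as an arbitrary illustrative choice (muller_decimal is our name for the helper):

```python
from decimal import Decimal, getcontext

def muller_decimal(n, digits):
    # run the recurrence with `digits` significant decimal digits
    getcontext().prec = digits
    xs = [Decimal(11) / Decimal(2), Decimal(61) / Decimal(11)]
    for k in range(2, n):
        xs.append(Decimal(111) - (Decimal(1130) - Decimal(3000) / xs[k - 2]) / xs[k - 1])
    return xs

# with 50 digits the roundoff seed is ~1e-50, so 30 terms still track 6...
print(float(muller_decimal(30, 50)[-1]))
# ...but run long enough and the same divergence to 100 takes over
print(float(muller_decimal(80, 50)[-1]))
```

This is the signature of an unstable computation: more precision buys more correct terms, but the relative error still grows geometrically until the spurious root 100 dominates.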

In [13]:
def StudyH(h, a=0.5, n=30, **kwargs):
    xpoints = np.zeros(n)
    hpoints = np.zeros(n)
    for i in range(n):
        hpoints[i] = vectorOfxs[i]
        xpoints[i] = i
    plt_xvsy(xpoints, hpoints, title="Refinement $i$ Vs. Error", **kwargs)
    print hpoints

i = interact(StudyH,
             h = (0.000001, 0.5),
             a = (0.0, 1.0),
             n = (1, 100),
             color = ["red", "blue", "green", "orange"],
             linestyle = ["solid", "dashed"],
             linewidth = (1, 5))
[ 5.5 5.54555455 5.59206473 5.66750175 6.27563478 15.28597846 68.34910194 97.33863967 99.84196766 99.99080429 99.99946327 99.99996855 99.99999815 99.99999989 99.99999999 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. 100. ]
In [ ]: