#computes the table of divided differences
def DividedDifferences(points):
    #we compute n from the size of the array
    n = len(points) - 1
    #create an (n+1) by (n+1) array
    divDiff = [[0 for i in range(n+1)] for j in range(n+1)]
    #the first column (column 0) contains the y values: f[.]
    for row in [0, 1, 2, .., n]:
        divDiff[row][0] = points[row][1]
    #compute the divided differences recursively,
    #one column at a time:
    #column 1 will hold f[.,.], column 2 will hold f[.,.,.], etc.
    for column in [1, 2, .., n]:
        #for each column, compute the value in each row
        for row in [0, 1, 2, .., n - column]:
            divDiff[row][column] = ((divDiff[row+1][column-1] - divDiff[row][column-1])
                                    / (points[row+column][0] - points[row][0]))
    return divDiff
def printDividedDifferences(points):
    #we compute n from the size of the array
    n = len(points) - 1
    divDiff = DividedDifferences(points)
    for row in [0, 1, 2, .., n]:
        print points[row][0], "\t|", [divDiff[row][column] for column in [0, 1, 2, .., n - row]]
#compute the Newton interpolation polynomial
def Newton_Interpolation(x, points):
    #we compute n from the size of the array
    n = len(points) - 1
    #the very first step is computed here:
    #p0(x) = y0
    p(x) = points[0][1]
    #newTerm starts as x - x0;
    #at step k it stores (x - x0)(x - x1)...(x - x_{k-1})
    newTerm(x) = x - points[0][0]
    for k in range(1, n+1):
        #we repeatedly add one new data point:
        #first compute the new coefficient c = f[x0, ..., xk]
        c = ((points[k][1] - p(points[k][0])) / newTerm(points[k][0]))
        #and then add the new term to the interpolating polynomial
        p(x) = p(x) + c*newTerm(x)
        #prepare for the next iteration
        newTerm(x) = newTerm(x) * (x - points[k][0])
    return p
points = [[-2, 1],[-1, 4],[0, 11],[1,16],[2,13],[3,-4]]
printDividedDifferences(points)
print
print "Create Newton interpolation polynomial and check results"
print
#call the function and find the final answer
f(x) = Newton_Interpolation(x, points)
print "Interpolating polynomial f(x) = ", f(x)
print "f([points]) = ", [f(pt[0]) for pt in points]
print
We get a cubic interpolating polynomial from the divided difference table.
When the table is augmented with two additional data points, the coefficients of the higher-power terms are 0,
and so the interpolating polynomial remains unchanged as a cubic.
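As a quick sanity check, here is a plain-Python re-implementation of the divided-difference routine above (rewritten without Sage syntax so it runs anywhere); the top row of the table holds the Newton coefficients, and the last two entries come out 0 for these data points, confirming the cubic:

```python
# Plain-Python divided-difference table (mirrors the Sage routine above).
def divided_differences(points):
    n = len(points) - 1
    d = [[0.0] * (n + 1) for _ in range(n + 1)]
    # column 0 holds the y values
    for row in range(n + 1):
        d[row][0] = points[row][1]
    # each later column is built from the previous one
    for col in range(1, n + 1):
        for row in range(n + 1 - col):
            d[row][col] = ((d[row + 1][col - 1] - d[row][col - 1])
                           / (points[row + col][0] - points[row][0]))
    return d

points = [[-2, 1], [-1, 4], [0, 11], [1, 16], [2, 13], [3, -4]]
table = divided_differences(points)
print(table[0])  # Newton coefficients c0..c5; the last two are 0
```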
Linear interpolation in a table of function values means the following: if $y_0 = f(x_0)$ and $y_1 = f(x_1)$ are tabulated values, and if $x_0 < x < x_1$, then an interpolated value of $f(x)$ is \begin{equation*} y_0 + \frac{y_1 - y_0}{x_1 - x_0} (x - x_0). \end{equation*}
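For instance, here is the formula in plain Python, applied to $\sin x$ between two adjacent nodes with spacing $h = 0.01$ (the nodes $0.50$, $0.51$ and the query point $0.503$ are illustrative choices, and the "tabulated" values are computed with `math.sin` rather than read from an actual table):

```python
import math

def linear_interp(x0, y0, x1, y1, x):
    # y0 + (y1 - y0)/(x1 - x0) * (x - x0)
    return y0 + (y1 - y0) / (x1 - x0) * (x - x0)

x0, x1 = 0.50, 0.51            # adjacent table nodes, h = 0.01
x = 0.503                      # query point between the nodes
approx = linear_interp(x0, math.sin(x0), x1, math.sin(x1), x)
print(approx, abs(approx - math.sin(x)))
```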
How accurately can we determine $\sin x$ by linear interpolation, given a table of $\sin x$ to 10 decimal places for $x \in [0, 2]$, with $h = 0.01$?
n = 1
M = 1
h = 0.01
#error bound: (1/(4(n+1))) * M * h^(n+1)
1/(4*(n+1))*M*h^(n+1)
Let $f$ be a function such that $f^{(n+1)}$ is continuous on $[a, b]$ and satisfies $|f^{(n+1)}(x)| \leq M$. Let $p$ be the polynomial of degree $\leq n$ that interpolates $f$ at $n + 1$ equally spaced nodes in $[a, b]$, including the endpoints. Then on $[a, b]$, \begin{align} |f(x) - p(x)| \leq \frac{1}{4(n + 1)} Mh^{n+1} \end{align} where $h = (b - a)/n$ is the spacing between nodes.
Linear interpolation is used on each segment, so we have $n+1=2$ points, and with interval width $h=0.01$ the error bound is \begin{align*} \frac{1}{4(n+1)} M h^{n+1} = \frac{1}{8}\cdot 1\cdot (0.01)^2 = 1.25\times10^{-5}. \end{align*}
Setting $M = 1$ is justified because every derivative of $\sin$ is $\pm\sin$ or $\pm\cos$,
and hence bounded by 1.
The maximum error of $\sin x$ under this linear interpolation is therefore $1.25\times10^{-5}$.
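This bound can be checked empirically in plain Python by scanning every subinterval of $[0, 2]$ at a few interior sample points (a rough sketch; sampling 9 interior points per subinterval is an arbitrary choice):

```python
import math

h = 0.01
worst = 0.0
# scan each subinterval [x0, x0 + h] of [0, 2]
for i in range(200):
    x0 = i * h
    y0, y1 = math.sin(x0), math.sin(x0 + h)
    # sample a few interior points of the subinterval
    for j in range(1, 10):
        x = x0 + j * h / 10
        p = y0 + (y1 - y0) / h * (x - x0)
        worst = max(worst, abs(p - math.sin(x)))
print(worst)  # stays below the theoretical bound 1.25e-5
```

The observed maximum sits just under $1.25\times10^{-5}$, attained near $x = \pi/2$ where $|\sin''| = |\sin|$ is largest.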
n = 1
M = 1
h = sqrt(8 * .5 * 10^-5.)
N = ceil(pi.n(100) / h); N
This analysis is based on the same error estimation formula used in Exercise 2.
Five decimal places of accuracy means the error must be $\leq \frac{1}{2} \times 10^{-5}$.
Linear interpolation is used on each segment, so we have $n+1=2$ points.
Therefore we need $\frac{1}{8}h^2 \leq \frac{1}{2}\times 10^{-5}$, i.e. $h \leq \sqrt{4\times 10^{-5}} \approx 0.0063$.
The minimum number of entries needed is 497.
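The same computation in plain Python (assuming, as the Sage cell above does, that the table covers $[0, \pi]$):

```python
import math

target = 0.5e-5                # five decimal places of accuracy
h = math.sqrt(8 * target)      # solve (1/8) * h^2 <= 0.5e-5 for h
N = math.ceil(math.pi / h)     # entries needed at spacing h
print(h, N)                    # N comes out to 497
```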
Averaging the forward-difference formula $f'(x) \approx [f(x + h) - f(x)]/h$ and the backward-difference formula $f'(x) \approx [f(x) - f(x - h)]/h$, each with error term $O(h)$, results in the central-difference formula $f'(x) \approx [f(x + h) - f(x - h)]/(2h)$, with error term $O(h^2)$. Show why. Hint: Determine at least the first term in the error series for each formula.
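A numerical check in plain Python illustrates the two convergence rates (the choice $f = \exp$ at $x = 1$ is an assumption for illustration only):

```python
import math

f, x = math.exp, 1.0
exact = math.exp(1.0)          # f'(x) = e^x for f = exp

for h in [0.1, 0.01, 0.001]:
    fwd = (f(x + h) - f(x)) / h            # forward difference
    cen = (f(x + h) - f(x - h)) / (2 * h)  # central difference
    print(h, abs(fwd - exact), abs(cen - exact))
# forward error shrinks ~10x per step (O(h));
# central error shrinks ~100x per step (O(h^2))
```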
Criticize the following analysis. By Taylor's formula, we have
\begin{align} f(x + h) - f(x) &= hf'(x) + \frac{h^2}{2}f''(x) + \frac{h^3}{6}f'''(\xi)\\ f(x - h) - f(x) &= -hf'(x) + \frac{h^2}{2}f''(x) - \frac{h^3}{6}f'''(\xi) \end{align} So by adding, it seems that we obtain an \emph{exact}(!) expression for $f''(x)$: $$ f(x + h) + f(x - h) - 2f(x) = h^2 f''(x). $$ Is this correct?
No. The formulas are incomplete: the remainder terms involve two different intermediate points, so they do not cancel when added. The correct formulas are: \begin{align} f(x + h) - f(x) &= hf'(x) + \frac{h^2}{2}f''(x) + \frac{h^3}{6}f'''(\xi_1) && \text{where $\xi_1$ is between $x$ and $x+h$,} \\ f(x - h) - f(x) &= -hf'(x) + \frac{h^2}{2}f''(x) - \frac{h^3}{6}f'''(\xi_2) && \text{where $\xi_2$ is between $x-h$ and $x$.} \end{align} Adding these gives only the \emph{approximation} \begin{align} f''(x) \approx \left[f(x + h) + f(x - h) - 2f(x)\right] / h^2, \end{align} whose error term (seen by carrying the Taylor expansions to fourth order) is $O(h^2)$; it is not an exact identity.
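Numerically (plain Python, again taking $f = \exp$ at $x = 1$ as an illustrative assumption), the second-difference quotient has a small but nonzero error that shrinks like $O(h^2)$:

```python
import math

f, x = math.exp, 1.0
exact = math.exp(1.0)          # f''(x) = e^x for f = exp

for h in [0.1, 0.01]:
    d2 = (f(x + h) + f(x - h) - 2 * f(x)) / h**2
    print(h, abs(d2 - exact))
# the error is nonzero, and shrinks ~100x when h shrinks 10x: O(h^2)
```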