All published worksheets from http://sagenb.org
Floating Point Numbers
This is a very rough sheet containing calculations and explorations for Numerical Analysis in Sage.
For the following command (setting the RR print options) to work, you need to apply the patches at http://trac.sagemath.org/sage_trac/ticket/7682. Alternatively, every time you print a real number m, you can print it as m.str(truncate=False). This is important in this worksheet since we want to see the roundoff error, and not have Sage hide it by showing only correct digits.
First, let's convert 13.28125 to IEEE-754 format.
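As a quick check (a sketch in plain Python rather than Sage, using the standard `struct` module), we can pack 13.28125 as a double and read off its sign, exponent, and fraction fields:

```python
import struct

# Pack 13.28125 as an IEEE-754 double (big-endian) and inspect the raw 64 bits.
x = 13.28125                                  # = 1101.01001_2 = 1.10101001_2 * 2^3
bits = "".join(f"{b:08b}" for b in struct.pack(">d", x))
sign, exponent, fraction = bits[0], bits[1:12], bits[12:]
print("sign    :", sign)
print("exponent:", exponent, "=", int(exponent, 2), "(biased: 3 + 1023 = 1026)")
print("fraction:", fraction)                  # 10101001 followed by zeros
```

The leading 1 of the mantissa is implicit, so only the fractional bits 10101001 are stored.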
So here is the IEEE-754 binary representation.
So they agree!
MPFR uses a slightly different convention: it stores the mantissa as an integer (together with a sign and an exponent).
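As an illustration of that point of view (plain Python, not MPFR itself): the exact value of the double 13.28125 is an integer scaled by a power of 2.

```python
import math
from fractions import Fraction

x = 13.28125
f = Fraction(x)                  # the exact value of the double: 425/32
print(f, "=", f.numerator, "* 2^-5")

# The same number with a full-width integer mantissa, via math.frexp:
# x = m * 2^e with 0.5 <= m < 1, so int(m * 2^53) is a 53-bit integer.
m, e = math.frexp(x)
print(int(m * 2**53), "* 2 ^", e - 53)
```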
We can't represent all numbers
A finite system
Let's imagine we have a 2-bit mantissa and a 2-bit exponent. Then all of our machine numbers look like (1.b₁b₂)₂ × 2^E, where b₁ and b₂ can each be 0 or 1, and E can be −1, 0, 1, or 2.
Question: List all possible numbers in our set.
or as fractions
The distribution of these numbers is interesting. Let's plot them on the real line.
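Here is one way to list them (a sketch in plain Python, assuming the toy system (1.b₁b₂)₂ × 2^E with b₁, b₂ ∈ {0, 1} and E ∈ {−1, 0, 1, 2}; a text-mode number line stands in for a Sage plot):

```python
from fractions import Fraction

# Enumerate the positive machine numbers of the assumed toy system.
numbers = sorted(Fraction(4 + 2*b1 + b2, 4) * Fraction(2)**E
                 for b1 in (0, 1) for b2 in (0, 1) for E in (-1, 0, 1, 2))
print(", ".join(str(q) for q in numbers))          # as fractions
print(", ".join(str(float(q)) for q in numbers))   # as decimals

# A crude text "plot" on the real line, one character per 1/16:
line = [" "] * (7 * 16 + 1)
for q in numbers:
    line[int(q * 16)] = "|"
print("".join(line))
```

In Sage itself one could use `list_plot` or `points` to draw these on the real line.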
Note that the numbers are unevenly spaced. In fact, count how many numbers are in [1, 2), how many are in [2, 4), and how many are between 4 and 8. What do you notice? Is there a pattern, and why?
Note that there is no machine number representing 1.1. In order to deal with machine arithmetic, the machine needs to decide how to represent 1.1. We notate the machine's version of 1.1 as fl(1.1) ("fl" stands for "float", as in "floating point approximation").
What are the reasonable possibilities for fl(1.1)? What would you choose, and why?
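The candidates can be computed directly (a sketch assuming the toy system above, whose machine numbers in [1, 2] are 1, 5/4, 3/2, 7/4, and 2):

```python
from fractions import Fraction

machine = [Fraction(1), Fraction(5, 4), Fraction(3, 2), Fraction(7, 4), Fraction(2)]
x = Fraction(11, 10)                                # exact 1.1
below = max(m for m in machine if m <= x)           # round down / toward zero
above = min(m for m in machine if m >= x)           # round up
nearest = min(machine, key=lambda m: abs(m - x))    # round to nearest
print(below, above, nearest)
```

So fl(1.1) is either 1 or 5/4, depending on the rounding rule; round-to-nearest picks 1.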
The gap around zero is called the "hole at zero". Where does this come from? Subnormal numbers fill this gap in IEEE 754-2008.
What happens if we do on the machine? In other words, what is ?
What is ?
What is , as the machine does it? (first write down all applicable , then calculate the result.)
Rounding Modes
Look up in Wikipedia what the possible rounding modes are for IEEE 754-2008. For each of these rounding modes, what is the rounded value? (To answer this for some of the rounding modes, you'll need to know what the mantissas for 3 and 3.5 are.)
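The rounding directions can be compared with Python's `decimal` module (a base-10 sketch, but its rounding constants correspond directly to the IEEE 754-2008 directions). As an assumed example, take x = 3.25, which is a tie between the machine numbers 3 and 3.5 of the toy system, so the modes visibly disagree:

```python
from decimal import (Decimal, ROUND_CEILING, ROUND_FLOOR, ROUND_DOWN,
                     ROUND_HALF_EVEN, ROUND_HALF_UP)

# Machine numbers near 3 are spaced 0.5 apart (3, 3.5, 4, ...), so we round
# x to a multiple of 0.5 by doubling, rounding to an integer, and halving.
x = Decimal("3.25")
modes = {
    "roundTowardPositive": ROUND_CEILING,
    "roundTowardNegative": ROUND_FLOOR,
    "roundTowardZero":     ROUND_DOWN,
    "roundTiesToEven":     ROUND_HALF_EVEN,
    "roundTiesToAway":     ROUND_HALF_UP,
}
results = {}
for name, mode in modes.items():
    results[name] = (x * 2).quantize(Decimal("1"), rounding=mode) / 2
    print(f"{name:22s} {results[name]}")
```

Note that ties-to-even lands on 3 because 3 = (1.10)₂ × 2¹ has an even (zero) last mantissa bit, while 3.5 = (1.11)₂ × 2¹ does not; here decimal's digit-evenness happens to agree with the binary picture.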
Relative Error
What is the relative error of the approximation fl(1.1) to 1.1? (See the definition from last class or at the bottom of p. 19 of the text.)
What is the maximum relative error for a number x such that 1 ≤ x ≤ 2? This relative error is called the machine epsilon, and is often denoted with the Greek letter ε (or ε_mach).
What is the maximum relative error for a number x such that 2 ≤ x ≤ 4? What about the maximum relative error for a number x such that 4 ≤ x ≤ 8? Do you notice a pattern? What about the maximum relative error for a number x such that 1/2 ≤ x ≤ 1?
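We can check this experimentally (a sketch assuming the toy system (1.b₁b₂)₂ × 2^E with E ∈ {−1, 0, 1, 2} and round-to-nearest): scan a fine grid of each interval and record the worst relative error.

```python
from fractions import Fraction

# All positive machine numbers of the assumed toy system.
machine = sorted(Fraction(4 + m, 4) * Fraction(2)**E
                 for m in range(4) for E in (-1, 0, 1, 2))

def rel_err(x):
    """Relative error when x is rounded to the nearest machine number."""
    fl = min(machine, key=lambda m: abs(m - x))
    return abs(x - fl) / x

maxima = []
for lo, hi in [(Fraction(1), Fraction(2)), (Fraction(2), Fraction(4)),
               (Fraction(1, 2), Fraction(1))]:
    grid = [lo + (hi - lo) * Fraction(k, 720) for k in range(720)]
    worst = max(rel_err(t) for t in grid)
    maxima.append(worst)
    print(f"max relative error on [{lo}, {hi}): {worst}")
```

The maximum is the same in every interval (here 1/9, attained at the midpoint just above the left endpoint), slightly under the usual bound of half the spacing relative to the binade's left endpoint, 2⁻³ = 1/8.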
What is the machine epsilon for a 53-bit number? (Work it out by hand from the number of mantissa bits, and verify it with the next computation.)
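A quick verification in plain Python (whose floats are 53-bit doubles):

```python
import sys

# Spacing between 1.0 and the next representable double is 2^-52;
# the worst-case relative rounding error (unit roundoff) is half that, 2^-53.
eps = sys.float_info.epsilon
print(eps, eps == 2.0**-52)
u = eps / 2
print(1.0 + u == 1.0)    # True: 1 + 2^-53 is a tie, and ties-to-even gives 1.0
print(1.0 + eps > 1.0)   # True: a full ulp does change the result
```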
So intuitively, what does it mean if you do several computations using 53-bit precision floating point numbers (i.e., double precision) and the results differ by about 10⁻¹⁶, or if you get an answer that is about 10⁻¹⁶?
Some more topics
The machine computes with fl(x) rather than x, and so a computed f(x) is really (approximately) f(fl(x)). So how should we perturb the input so that the computed answer is the exact answer for the perturbed input (backward error analysis), or how much does our computed answer differ from the exact answer (forward, or direct, error analysis)?
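A classic one-line illustration of this in plain Python:

```python
# The machine adds fl(0.1) and fl(0.2), not 0.1 and 0.2, so the computed sum
# is the exact sum of slightly perturbed inputs (the backward error view).
s = 0.1 + 0.2
print(s)              # not exactly 0.3
print(s == 0.3)       # False
print(abs(s - 0.3))   # on the order of machine epsilon (~1e-16)
```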
significant digits
subtraction
dividing by small number
quadratic formula
Horner's form
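Several of these topics can be previewed in one short sketch (the quadratic and the polynomial below are illustrative examples, not taken from the text): subtracting nearly equal numbers in the naive quadratic formula destroys the small root, while the product-of-roots rearrangement avoids the cancellation; Horner's form evaluates a polynomial with fewer operations.

```python
import math

# Catastrophic cancellation: solve x^2 - 1e8*x + 1 = 0.
# The true roots are approximately 1e8 and 1e-8.
a, b, c = 1.0, -1e8, 1.0
disc = math.sqrt(b*b - 4*a*c)
naive_small = (-b - disc) / (2*a)     # subtracts two nearly equal numbers: inaccurate
stable_big = (-b + disc) / (2*a)      # no cancellation here
stable_small = c / (a * stable_big)   # uses product of roots = c/a: accurate
print(naive_small, stable_small)

# Horner's form: p(x) = 2x^3 - 3x^2 + 5x - 7 with 3 multiplications instead of 6.
def horner(coeffs, x):
    """Evaluate a polynomial given coefficients from highest to lowest degree."""
    result = 0.0
    for ci in coeffs:
        result = result * x + ci
    return result

print(horner([2.0, -3.0, 5.0, -7.0], 2.0))   # 2*8 - 3*4 + 5*2 - 7 = 7
```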
You can read about two historical disasters caused by ignoring numerical issues here: http://www.ima.umn.edu/~arnold/455.f96/disasters.html