Simulating Random Variables Using a Uniform Distribution
As discussed briefly in the book (and in more detail in the notes), we can use the built-in random function, available in most programming languages, to simulate other random variables as well. We will need to know the density (or some piece of information that gives us the density) of the random variable to be simulated.
First, let's take a look at the random function.
The random function returns a pseudorandom sample from a Uniform(0,1) distribution. Let's use it to simulate some other random variables. Recall that the idea is that Y = F^(-1)(U), where Y is the new random variable, U is the Uniform(0,1) random variable, and F is the distribution function for Y.
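In plain Python (which underlies Sage), the built-in random function is random.random; a minimal sketch:

```python
import random

# random.random() returns a pseudorandom float in [0, 1),
# i.e. a sample from the Uniform(0,1) distribution.
u = random.random()
print(u)
```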
Example 1:
Suppose Y is the Uniform(-2,2) random variable. Then F(y) = (y + 2)/4 for -2 <= y <= 2. We can invert this function algebraically: F^(-1)(u) = 4u - 2. Thus our code will be pretty simple.
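A sketch of that code in plain Python (the function name is an invention for this note):

```python
import random

def uniform_neg2_2():
    """Simulate a Uniform(-2,2) random variable.

    F(y) = (y + 2)/4 on [-2, 2], so F^(-1)(u) = 4*u - 2.
    """
    u = random.random()      # Uniform(0,1) sample
    return 4 * u - 2         # apply the inverse distribution function
```

Applying F^(-1) to a Uniform(0,1) sample works because P(4U - 2 <= y) = P(U <= (y + 2)/4) = (y + 2)/4, which is exactly the Uniform(-2,2) distribution function.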
Let's take a look at a sample of our random variable.
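Drawing a small sample might look like this (the simulator is restated so the snippet stands alone; the function name is an assumption):

```python
import random

def uniform_neg2_2():
    # Inverse distribution function of Uniform(-2,2) applied to Uniform(0,1)
    return 4 * random.random() - 2

sample = [uniform_neg2_2() for _ in range(10)]
print(sample)
```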
This seems to be working pretty well! Let's move on to our more complicated examples.
Example 2:
Suppose that Y is the Normal(0,1) random variable. The density function is f(y) = e^(-y^2/2) / sqrt(2*pi), and can be implemented as:
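A direct implementation of this density in plain Python might be:

```python
from math import exp, pi, sqrt

def f(y):
    """Standard normal density: f(y) = e^(-y^2/2) / sqrt(2*pi)."""
    return exp(-y**2 / 2) / sqrt(2 * pi)
```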
We then tabulate the density and distribution functions:
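One way to do the tabulation, assuming truncation to [-6, 6] and a step size of 0.01 (grid choices made for this sketch, not taken from the worksheet); FY holds running Riemann sums that approximate the distribution function:

```python
from math import exp, pi, sqrt

def f(y):
    """Standard normal density."""
    return exp(-y**2 / 2) / sqrt(2 * pi)

# Tabulate the density on [-6, 6] in steps of dy.
dy = 0.01
ys = [-6 + k * dy for k in range(1201)]      # 1201 evenly spaced grid points
fY = [f(y) for y in ys]

# Approximate the distribution function by a running Riemann sum.
FY = []
total = 0.0
for value in fY:
    total += value * dy
    FY.append(total)

print(FY[:5])    # the first few values of FY
```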
The first few values of FY are...
Now we construct the code to draw a sample value from the normal distribution.
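A sketch of the table-lookup sampler: scan the tabulated FY for the first grid point where it reaches the uniform sample u. The setup is restated so the snippet runs on its own; all names are assumptions.

```python
import random
from math import exp, pi, sqrt

def f(y):
    return exp(-y**2 / 2) / sqrt(2 * pi)

dy = 0.01
ys = [-6 + k * dy for k in range(1201)]
FY = []
total = 0.0
for y in ys:
    total += f(y) * dy
    FY.append(total)

def normal_sample():
    """Draw an approximate Normal(0,1) sample by inverting the
    tabulated FY: return the first grid point with FY >= u."""
    u = random.random()
    for y, F in zip(ys, FY):
        if F >= u:
            return y
    return ys[-1]    # u fell beyond the truncated table
```

The returned values are rounded to the grid spacing of 0.01. The linear search is simple but slow; a bisection search over the sorted FY table would be the natural speedup mentioned at the end of this example.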
We can draw a few values and look at them.
It is hard to say from this small sample whether the data is approximately normal. So, let's draw a much larger sample - N=5000 - and construct a frequency plot for (rounded) values.
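A sketch of that experiment; instead of a plot, it prints a crude text histogram of the rounded values (the setup is restated so the snippet runs on its own):

```python
import random
from collections import Counter
from math import exp, pi, sqrt

def f(y):
    return exp(-y**2 / 2) / sqrt(2 * pi)

dy = 0.01
ys = [-6 + k * dy for k in range(1201)]
FY = []
total = 0.0
for y in ys:
    total += f(y) * dy
    FY.append(total)

def normal_sample():
    u = random.random()
    for y, F in zip(ys, FY):
        if F >= u:
            return y
    return ys[-1]

# Draw N samples, round each, and tally the frequencies.
N = 5000
counts = Counter(round(normal_sample()) for _ in range(N))
for y in sorted(counts):
    print('%3d %s' % (y, '*' * (counts[y] // 25)))
```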
This seems to be producing a sample with a distribution tolerably close to a normal distribution. If we needed to use this code in applications, we would want to (a) design it to be much faster, and (b) test it more rigorously.
Example 3:
Now let's consider a discrete random variable. We can define the density function through a simple list of probabilities.
(Note that this is truly a density function, since the probabilities sum to 1.)
Construct the distribution:
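For instance (the worksheet's actual list of probabilities has not survived, so the values below are a stand-in; any list of nonnegative values summing to 1 works the same way):

```python
# Hypothetical density: P(Y = k) = p[k] for k = 0, 1, ..., 4.
p = [0.1, 0.2, 0.4, 0.2, 0.1]

# The distribution function is the running sum: F[k] = P(Y <= k).
F = []
total = 0.0
for prob in p:
    total += prob
    F.append(total)
print(F)
```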
This distribution function is then
As for the normal random variable, we now define our discrete random variable with a simple loop.
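The loop steps through the tabulated distribution until it first meets or exceeds the uniform sample u; the probability list is a hypothetical stand-in, and the names are assumptions:

```python
import random

p = [0.1, 0.2, 0.4, 0.2, 0.1]    # hypothetical density
F = []
total = 0.0
for prob in p:
    total += prob
    F.append(total)

def discrete_sample():
    """Return the first k with u <= F[k], i.e. F^(-1)(u) for the
    discrete distribution."""
    u = random.random()
    for k, Fk in enumerate(F):
        if u <= Fk:
            return k
    return len(p) - 1    # guard against rounding error at the top
```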
We can look at a small sample...
...and we can look at a large sample to see whether we have (approximately) the correct distribution.
Compare the sampled results,
to the given probability density:
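One way to make the comparison is to print the empirical frequencies from a large sample next to the given probabilities (all names, and the probability list, are hypothetical stand-ins):

```python
import random
from collections import Counter

p = [0.1, 0.2, 0.4, 0.2, 0.1]    # hypothetical density
F = []
total = 0.0
for prob in p:
    total += prob
    F.append(total)

def discrete_sample():
    u = random.random()
    for k, Fk in enumerate(F):
        if u <= Fk:
            return k
    return len(p) - 1

# Tally a large sample and compare relative frequencies to p.
N = 5000
counts = Counter(discrete_sample() for _ in range(N))
for k, pk in enumerate(p):
    print(k, counts[k] / N, pk)
```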
We are not far off at all!
It is worth noting that Sage (and the underlying Python language) incorporates some nice tools for manipulating discrete random variables. In this case, the choice function works particularly well.
Thus we can avoid computing our distribution and coding the loop that steps through and looks at values.
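The worksheet's code for this is not shown; in modern Python, the closest built-in is random.choices (note the plural), which samples directly from a weighted list, so no distribution function or search loop is needed. The probability list is the same hypothetical stand-in as above:

```python
import random

p = [0.1, 0.2, 0.4, 0.2, 0.1]    # hypothetical density
support = range(len(p))          # the values 0, 1, ..., 4

# random.choices draws with replacement according to the given weights.
sample = random.choices(support, weights=p, k=10)
print(sample)
```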
Section 6.4: The Method of Transformations
Suppose that X is a given random variable with known density function f_X, and define Y = g(X), where g is a function. Then we would like to find the density of Y. On page 316 of our text, the authors outline the method of transformations. While this method can be useful, it suffers from one flaw: we must be able to algebraically solve y = g(x) for x in order to find a formula for the inverse function g^(-1).
For example, suppose that X is the uniform random variable Uniform(1,2), and let Y = g(X) for a fixed function g. Plotting g on the support of X, [1, 2], indicates that it satisfies the monotonicity requirement of the method of transformations:
In order to apply the method of transformations to find f_Y, we need to solve y = g(x) for x. This is, algebraically speaking, an ugly problem.
Instead of approaching this symbolically, we can try to approach it numerically. The method we demonstrate here is fairly elementary, and is based on evenly spaced samples of the function g.
So, let's begin by tabulating g on the support of X.
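The worksheet's formula for g has not survived, so the sketch below substitutes a hypothetical transformation, g(x) = x + e^x, which is monotone increasing on [1, 2] and has no elementary algebraic inverse:

```python
from math import exp

def g(x):
    """Hypothetical transformation (stand-in for the worksheet's g):
    monotone increasing on [1, 2], hard to invert algebraically."""
    return x + exp(x)

# Evenly spaced tabulation of g on the support of X, [1, 2].
dx = 0.001
xs = [1 + k * dx for k in range(1001)]    # 1001 grid points
gs = [g(x) for x in xs]
```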
We can now use these tables to construct the density function for Y.
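Continuing with the hypothetical g(x) = x + e^x: since X is Uniform(1,2), f_X = 1 on the support, and the derivative in the transformation formula f_Y(y) = f_X(g^(-1)(y)) / g'(g^(-1)(y)) can be estimated by a difference quotient over the table:

```python
from math import exp

def g(x):
    return x + exp(x)    # hypothetical monotone transformation

dx = 0.001
xs = [1 + k * dx for k in range(1001)]
gs = [g(x) for x in xs]

# At the grid point y_i = g(x_i), estimate
#   f_Y(y_i) = f_X(x_i) / g'(x_i) ~ 1.0 * dx / (g(x_{i+1}) - g(x_i)),
# using a difference quotient in place of the derivative.
ys = gs[:-1]
fY = [1.0 * dx / (gs[i + 1] - gs[i]) for i in range(len(gs) - 1)]
```

Each pair (ys[i], fY[i]) is one tabulated point of the density of Y; summing fY[i] times the spacing gs[i+1] - gs[i] recovers total probability 1, as a density should.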
This is a fairly elementary example (in particular, the fact that the density of X is constant on its support makes things quite a bit simpler). However, it indicates how more complex examples can be approached numerically, using function tabulation together with a difference quotient estimate of the derivative.