All published worksheets from http://sagenb.org
Flávio Codeço Coelho - (http://pyinsci.blogspot.com)
license: Creative Commons non-commercial share-alike
This tutorial explores the problem of sampling from high-dimensional probability distributions. To run this worksheet you will need to install an additional package into Sage with this command: ./sage -python -m easy_install -U BIP
To illustrate the problem, let's start with 2 dimensions and a bivariate normal distribution.
After some basic imports, let's look at the shape of the bivariate normal with means μ_x and μ_y equal to zero, standard deviations σ_x and σ_y equal to 1, and no covariance.
We built this bivariate normal contour plot by sampling uniformly along x and y. If we happened to be interested only in samples within 2 standard deviations of the mean, we can easily conclude, by looking at the figure above, that all the samples coming from the dark blue regions of the picture would represent wasted sampling effort. The proportion of wasted effort here is approximately the area of the figure (a square of side 4) minus the area of the largest circle we see in the figure (radius 2), divided by the area of the square: (4² − π·2²)/4² = 1 − π/4
or, in numerical terms:
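The original worksheet's computation cell is missing; a minimal sketch of the arithmetic (square of side 4 against its inscribed circle of radius 2) would be:

```python
from math import pi

# square of side 4 (covering ±2 standard deviations on each axis)
square_area = 4.0 ** 2
# inscribed circle of radius 2 (the 2-standard-deviation disc)
circle_area = pi * 2.0 ** 2
# fraction of the square that lies outside the circle: 1 - pi/4
wasted = (square_area - circle_area) / square_area
print(wasted)  # ~0.2146
```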
Approximately 21% of the sampling effort is wasted.
But what happens to that ratio as we increase the dimensionality of the target distribution?
The volume of the unit hypersphere is given by the following equation: V_n(r) = π^(n/2) r^n / Γ(n/2 + 1), where n is its dimensionality, r is its radius (here, 1) and Γ is the gamma function. The volume of the circumscribing hypercube, which has edges of length 2, is given by 2^n. With these formulas, we can easily calculate the efficiency of sampling from an n-dimensional Gaussian.
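The worksheet's code cell for this calculation is missing; a sketch of the ratio of the two volume formulas, using Python's standard-library gamma function, could look like this:

```python
from math import pi, gamma

def sphere_to_cube_ratio(n, r=1.0):
    """Fraction of the circumscribing hypercube's volume occupied
    by the n-dimensional hypersphere of radius r."""
    v_sphere = pi ** (n / 2.0) * r ** n / gamma(n / 2.0 + 1)
    v_cube = (2.0 * r) ** n
    return v_sphere / v_cube

# the sampling efficiency collapses as dimensionality grows
for n in (2, 5, 10, 20):
    print(n, sphere_to_cube_ratio(n))
```

For n = 2 this recovers the π/4 ≈ 0.785 efficiency seen above; by n = 10 the hypersphere occupies well under 1% of the hypercube.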
As we can see, sampling in multidimensional spaces quickly becomes prohibitively inefficient.
However, these results assume uniform sampling along the axes. What if there were a way to concentrate our sampling effort on the higher-probability regions? The good news is that there are ways.
The most basic way to draw random samples from a probability distribution is some form of rejection sampling. This works by sampling uniformly from the support of the distribution (in the case of a normal distribution, from −∞ to +∞) and retaining samples according to their probability as given by the PDF. Even though the final sample will have fewer points in the tails, the underlying sampling algorithm still suffers from the curse of dimensionality and is going to be very inefficient at high dimensionalities.
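A minimal sketch of rejection sampling for a standard normal (not the document's code; the truncation bounds of ±5 and the helper name are assumptions for illustration):

```python
import numpy as np
from scipy.stats import norm

def rejection_sample(n, lo=-5.0, hi=5.0, seed=None):
    """Draw x uniformly over a truncated support and accept it with
    probability pdf(x) / M, where M bounds the pdf from above."""
    rng = np.random.default_rng(seed)
    M = norm.pdf(0.0)  # the standard normal pdf peaks at x = 0
    accepted = []
    while len(accepted) < n:
        x = rng.uniform(lo, hi)
        # accept proportionally to the target density
        if rng.uniform(0.0, M) < norm.pdf(x):
            accepted.append(x)
    return np.array(accepted)

sample = rejection_sample(2000, seed=42)
```

Every rejected draw is wasted work, and in n dimensions the acceptance rate shrinks roughly like the hypersphere-to-hypercube volume ratio computed earlier.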
However, more clever implementations of random number generators have successfully avoided the curse of dimensionality. Let's see how Numpy's multivariate normal fares in this category:
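The original timing cell is missing; a sketch of the experiment (the choice of dimensions and the use of an identity covariance are assumptions) could be:

```python
import time
import numpy as np

# time how long it takes to draw 10,000 samples from an uncorrelated
# multivariate normal as the dimensionality grows
rng = np.random.default_rng(0)
for d in (2, 10, 50, 100):
    t0 = time.perf_counter()
    sample = rng.multivariate_normal(np.zeros(d), np.eye(d), size=10000)
    elapsed = time.perf_counter() - t0
    print(d, sample.shape, round(elapsed, 4))
```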
As we can see, the time cost to generate 10,000 valid samples increases linearly with dimensionality, not exponentially as the curse of dimensionality might predict.
But loss of efficiency is not the only ill effect of the curse. Whenever we sample from a PDF, we want our sample histogram to resemble the PDF we're sampling from. This can be a big problem, especially for distributions with complex shapes (e.g. multimodal) in high dimensions.
To address this particular issue we can resort to a clever sampling technique known as Latin Hypercube Sampling (LHS). This technique aims to guarantee that our samples are "well" spread out, ensuring better adherence of the sample distribution to the original PDF.
The BIP package has an implementation of LHS in Python which integrates well with Numpy's and Scipy's RNGs. Like multivariate_normal from Numpy, it can take into account the correlation structure between the variables of our joint probability distribution, and it also allows each variable to have any distribution type supported by Numpy. One difference to be aware of is that it takes the correlation matrix as an argument, not the covariance matrix.
Let's take a look at it:
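The BIP call itself is missing from this copy of the worksheet. To convey the idea, here is a minimal generic LHS sketch (not BIP's implementation, and it omits the correlation-induction step BIP performs): stratify [0, 1] into n equal slices, draw one uniform point per slice, shuffle the strata independently per variable, and map through each marginal's inverse CDF.

```python
import numpy as np
from scipy import stats

def lhs(dists, n, seed=None):
    """Minimal Latin Hypercube sketch: one draw per stratum of [0, 1],
    mapped through each marginal distribution's ppf (inverse CDF).
    Correlation induction is omitted here."""
    rng = np.random.default_rng(seed)
    d = len(dists)
    # one uniform point inside each of the n strata, for each variable
    u = (rng.uniform(size=(n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        # decouple the strata across dimensions
        u[:, j] = rng.permutation(u[:, j])
    return np.column_stack([dists[j].ppf(u[:, j]) for j in range(d)])

# a bivariate Normal-Beta sample, as in the figure below (uncorrelated here)
sample = lhs([stats.norm(0, 1), stats.beta(2, 5)], 200, seed=1)
```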
The figure above contains a scatter plot of a bivariate Normal-Beta distribution having a correlation of 0.8 between its variables.
Now that we have seen that it works, let's switch back to our discussion of the Curse, and how LHS can help us avoid its ill effects. To check whether LHS can improve the quality of our sampling, let's go back to multivariate normals, for simplicity's sake.
We need a little more code first:
The first function above generates samples from multivariate normal distributions of a given size and dimensionality. It returns two samples: one sampled randomly, and the other sampled through LHS.
The second function above calculates the log-likelihood of the samples under a normal model. It returns the likelihood ratio (LR), which, for log-likelihoods, translates into the difference between them.
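The functions themselves are missing from this copy; the log-likelihood half can be sketched as below (the function name and test values are assumptions for illustration):

```python
import numpy as np
from scipy import stats

def loglik(sample, mean, cov):
    """Total log-likelihood of a sample under a multivariate normal model."""
    return stats.multivariate_normal(mean, cov).logpdf(sample).sum()

# in log space the likelihood ratio becomes a difference:
#   LR = loglik(lhs_sample, mu, cov) - loglik(random_sample, mu, cov)
rng = np.random.default_rng(3)
d = 5
x = rng.multivariate_normal(np.zeros(d), np.eye(d), size=1000)
ll_true = loglik(x, np.zeros(d), np.eye(d))        # under the true model
ll_shifted = loglik(x, np.full(d, 3.0), np.eye(d))  # under a wrong mean
```

As a sanity check, the log-likelihood under the true parameters should exceed the one under a badly misspecified mean.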
So let's look at some results:
The plot above makes it clear that at higher dimensionalities LHS sampling better resembles a multivariate Normal distribution, while at lower dimensions it does not give us much advantage over random sampling. In the plot above, the smaller the number, the better LHS performs compared to random sampling.
The plot above indicates that sample size alone, when dimensionality is fixed (5D, in this case), does not seem to affect how much improvement in the normality of the sample the LHS method provides. LHS is still better at just about all sample sizes tested here (LR < 1).