Think Stats by Allen B. Downey is an introduction to Probability and Statistics for Python programmers. This is the accompanying code for the book.
License: GPL3
Examples and Exercises from Think Stats, 2nd Edition
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
Analytic methods
If we know the parameters of the sampling distribution, we can compute confidence intervals and p-values analytically, which is computationally faster than resampling.
Here's the confidence interval for the estimated mean.
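The idea can be sketched directly with scipy: under the normal approximation, the sampling distribution of the mean is Normal(mean, var / n), so the interval comes straight from the inverse CDF. The function name and the synthetic exponential sample are illustrative, not from the book.

```python
import numpy as np
from scipy.stats import norm

def analytic_ci(sample, confidence=0.90):
    """Normal-approximation confidence interval for the mean.

    The sampling distribution of the mean is approximately
    Normal(mean, var / n), so we read the interval off its
    inverse CDF instead of resampling.
    """
    sample = np.asarray(sample, dtype=float)
    mean = sample.mean()
    stderr = sample.std(ddof=1) / np.sqrt(len(sample))
    alpha = 1 - confidence
    low, high = norm.ppf([alpha / 2, 1 - alpha / 2], loc=mean, scale=stderr)
    return low, high

# example with synthetic exponential data (true mean 1.0)
rng = np.random.default_rng(17)
sample = rng.exponential(scale=1.0, size=1000)
low, high = analytic_ci(sample)
```

With 1000 values the interval is narrow, because the standard error shrinks like 1/sqrt(n).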
normal.py provides a Normal class that encapsulates what we know about arithmetic operations on normal distributions.
We can use it to compute the sampling distribution of the mean.
And then compute a confidence interval.
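The following is a minimal sketch of what such a class looks like, loosely modeled on normal.py (it is not the repository's actual implementation, and the population parameters in the example are made up):

```python
from scipy.stats import norm

class Normal:
    """Minimal sketch of a normal-distribution wrapper,
    loosely modeled on normal.py from the Think Stats repo."""

    def __init__(self, mu, sigma2):
        self.mu = mu
        self.sigma2 = sigma2  # variance, not standard deviation

    @property
    def sigma(self):
        return self.sigma2 ** 0.5

    def __add__(self, other):
        # sum of independent normals: means and variances add
        return Normal(self.mu + other.mu, self.sigma2 + other.sigma2)

    def __sub__(self, other):
        # difference: means subtract, variances still add
        return Normal(self.mu - other.mu, self.sigma2 + other.sigma2)

    def __truediv__(self, k):
        # dividing by a constant k divides the variance by k**2
        return Normal(self.mu / k, self.sigma2 / k**2)

    def sum(self, n):
        # distribution of the sum of n IID variates
        return Normal(n * self.mu, n * self.sigma2)

    def percentile(self, p):
        return norm.ppf(p / 100.0, self.mu, self.sigma)

# sampling distribution of the mean of n = 1000 values,
# using made-up population parameters
dist = Normal(90, 7.5)
dist_xbar = dist.sum(1000) / 1000     # Normal(90, 7.5 / 1000)
ci = dist_xbar.percentile(5), dist_xbar.percentile(95)
```

Encapsulating the arithmetic this way means the sampling distribution of a sum, mean, or difference falls out of operator overloading rather than ad hoc formulas.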
Central Limit Theorem
If you add up independent variates from a distribution with finite mean and variance, the sum converges to a normal distribution.
The following function generates samples with different sizes from an exponential distribution.
This function generates normal probability plots for samples with various sizes.
The following plot shows how the sum of exponential variates converges to normal as sample size increases.
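One way to sketch this check without plotting is to use the correlation coefficient that scipy.stats.probplot returns: it measures how straight the normal probability plot is, so by the CLT it should climb toward 1 as the sample size grows. The helper below and its parameters are illustrative.

```python
import numpy as np
from scipy.stats import probplot

def make_expo_sums(n, iters=1000, seed=18):
    """Sum of n exponential(1) variates, repeated iters times."""
    rng = np.random.default_rng(seed)
    return rng.exponential(scale=1.0, size=(iters, n)).sum(axis=1)

# probplot's r measures the straightness of the normal probability
# plot; closer to 1 means closer to normal
r_by_n = {}
for n in [1, 10, 100]:
    sums = make_expo_sums(n)
    (osm, osr), (slope, intercept, r) = probplot(sums, dist="norm")
    r_by_n[n] = r
```

For n = 1 the plot is visibly curved (the exponential is skewed); by n = 100 it is nearly straight.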
The lognormal distribution has higher variance, so it requires a larger sample size before it converges to normal.
The Pareto distribution has infinite variance, and sometimes infinite mean, depending on the parameters. It violates the requirements of the CLT and does not generally converge to normal.
If the random variates are correlated, that also violates an assumption of the CLT, so the sums don't generally converge to normal.
To generate correlated values, we generate correlated normal values and then transform to whatever distribution we want.
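A sketch of that two-step trick, here for an exponential target: generate an AR(1) series of standard normals with lag-1 correlation rho, then push each value through the normal CDF and the target distribution's inverse CDF. The function name and parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm, expon

def correlated_expo(rho, n, seed=19):
    """Serially correlated exponential variates: build an AR(1)
    series of standard normals with lag-1 correlation rho, then
    map each value through the normal CDF and the exponential
    inverse CDF (so the marginals stay exponential)."""
    rng = np.random.default_rng(seed)
    xs = np.empty(n)
    xs[0] = rng.normal()
    sigma = np.sqrt(1 - rho**2)   # keeps the marginal variance at 1
    for i in range(1, n):
        xs[i] = rho * xs[i - 1] + rng.normal(scale=sigma)
    return expon.ppf(norm.cdf(xs))

vals = correlated_expo(rho=0.9, n=5000)
serial_corr = np.corrcoef(vals[:-1], vals[1:])[0, 1]
```

The nonlinear transform attenuates the correlation somewhat, so the serial correlation of the output is a bit below rho, but the values remain strongly dependent.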
Difference in means
Let's use analytic methods to compute a CI and p-value for an observed difference in means.
The distribution of pregnancy length is not normal, but it has finite mean and variance, so the sum (or mean) of a few thousand samples is very close to normal.
The following function computes the sampling distribution of the mean for a set of values and a given sample size.
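In the book this function returns a Normal object; the sketch below returns just the two parameters, which is all the analytic method needs:

```python
import numpy as np

def sampling_dist_mean(data, n):
    """Parameters (mu, sigma2) of the approximately normal sampling
    distribution of the mean of n values drawn from the population
    that data summarizes: mu is the mean, sigma2 is var / n."""
    data = np.asarray(data, dtype=float)
    return data.mean(), data.var() / n

mu, sigma2 = sampling_dist_mean([1, 2, 3, 4, 5], n=8)
```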
Here are the sampling distributions for the means of the two groups under the null hypothesis.
And the sampling distribution for the difference in means.
Under the null hypothesis, here's the chance of exceeding the observed difference.
And the chance of falling below the negated difference.
The sum of these probabilities is the two-sided p-value.
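The two tail probabilities can be sketched directly from the normal CDF; the numbers in the example are made up, roughly the magnitude of the pregnancy-length difference.

```python
from scipy.stats import norm

def two_sided_pvalue(delta, sigma):
    """Two-sided p-value for an observed difference delta when the
    null sampling distribution is Normal(0, sigma**2)."""
    right = norm.sf(delta, loc=0, scale=sigma)    # chance of exceeding delta
    left = norm.cdf(-delta, loc=0, scale=sigma)   # chance below -delta
    return left + right

# made-up numbers: observed difference 0.078 with sigma 0.057
p = two_sided_pvalue(0.078, 0.057)
```

Because the null distribution is symmetric around zero, the two tails are equal and the result is just twice the upper tail.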
Testing a correlation
Under the null hypothesis (that there is no correlation), the sampling distribution of the observed correlation (suitably transformed) is a "Student t" distribution.
The following is a HypothesisTest that uses permutation to estimate the sampling distribution of a correlation.
Now we can estimate the sampling distribution by permutation and compare it to the Student t distribution.
That confirms the analytic result. Now we can use the CDF of the Student t distribution to compute a p-value.
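The analytic route can be sketched in a few lines: the transform is t = r * sqrt((n - 2) / (1 - r^2)), which has a Student t distribution with n - 2 degrees of freedom under the null. The function name is illustrative.

```python
import numpy as np
from scipy.stats import t as student_t

def correlation_pvalue(r, n):
    """Two-sided p-value for a sample correlation r over n pairs,
    via the transform t = r * sqrt((n - 2) / (1 - r**2)), which is
    Student t with n - 2 degrees of freedom under the null."""
    t_stat = r * np.sqrt((n - 2) / (1 - r**2))
    # sf is the survival function, 1 - CDF; doubling it gives both tails
    return 2 * student_t.sf(abs(t_stat), n - 2)

p = correlation_pvalue(0.5, 30)
```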
Chi-squared test
The reason the chi-squared statistic is useful is that we can compute its distribution under the null hypothesis analytically.
Again, we can confirm the analytic result by comparing values generated by simulation with the analytic distribution.
And then we can use the analytic distribution to compute p-values.
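A sketch of that computation, using scipy's chi2 distribution; the die-roll counts are example data, not drawn from the book's datasets.

```python
import numpy as np
from scipy.stats import chi2

def chi_squared_pvalue(observed, expected):
    """Chi-squared goodness-of-fit statistic and its analytic
    p-value, with len(observed) - 1 degrees of freedom."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    stat = ((observed - expected) ** 2 / expected).sum()
    # sf is the survival function: the chance of a statistic
    # this large or larger under the null
    return stat, chi2.sf(stat, len(observed) - 1)

# example die-roll counts against a uniform expectation
stat, p = chi_squared_pvalue([8, 9, 19, 5, 8, 11], [10] * 6)
```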
Exercises
Exercise: In Section 5.4, we saw that the distribution of adult weights is approximately lognormal. One possible explanation is that the weight a person gains each year is proportional to their current weight. In that case, adult weight is the product of a large number of multiplicative factors:
w = w0 f1 f2 ... fn
where w is adult weight, w0 is birth weight, and fi is the weight gain factor for year i.
The log of a product is the sum of the logs of the factors:
log w = log w0 + log f1 + log f2 + ... + log fn
So by the Central Limit Theorem, the distribution of logw is approximately normal for large n, which implies that the distribution of w is lognormal.
To model this phenomenon, choose a distribution for f that seems reasonable, then generate a sample of adult weights by choosing a random value from the distribution of birth weights, choosing a sequence of factors from the distribution of f, and computing the product. What value of n is needed to converge to a lognormal distribution?
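One possible setup for this exercise is sketched below. Both distributions (the birth-weight parameters and the lognormal gain factors) are assumptions chosen for illustration, not fit to data; probplot's r on the log of the simulated weights measures how close they are to lognormal.

```python
import numpy as np
from scipy.stats import probplot

def simulate_adult_weights(n_years, iters=1000, seed=17):
    """Sketch of the multiplicative model: adult weight is a birth
    weight times n_years annual gain factors.  The distributions
    here are assumptions for illustration, not fit to data."""
    rng = np.random.default_rng(seed)
    w0 = rng.normal(7.3, 1.2, size=iters)                # birth weight, lbs
    factors = rng.lognormal(0.08, 0.15, size=(iters, n_years))
    return w0 * factors.prod(axis=1)

# if the model holds, log(weight) should look normal;
# probplot's r measures how straight that plot is
weights = simulate_adult_weights(n_years=40)
(_, _), (_, _, r) = probplot(np.log(weights), dist="norm")
```

Rerunning with smaller n_years shows how quickly the product converges: even a modest number of factors brings r close to 1, because the log of the product is a sum to which the CLT applies.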
Exercise: In Section 14.6 we used the Central Limit Theorem to find the sampling distribution of the difference in means, δ, under the null hypothesis that both samples are drawn from the same population.
We can also use this distribution to find the standard error of the estimate and confidence intervals, but that would only be approximately correct. To be more precise, we should compute the sampling distribution of δ under the alternate hypothesis that the samples are drawn from different populations.
Compute this distribution and use it to calculate the standard error and a 90% confidence interval for the difference in means.
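A sketch of the computation this exercise asks for, using synthetic stand-ins for the two groups (the group parameters below are illustrative, not the NSFG data): under the alternate hypothesis each group keeps its own mean and variance, and the variances of the two sample means add.

```python
import numpy as np
from scipy.stats import norm

def diff_means_alternate(group1, group2):
    """Sampling distribution of the difference in means under the
    alternate hypothesis: each group keeps its own mean and
    variance.  Returns (mu, sigma) of the approximating normal."""
    g1 = np.asarray(group1, dtype=float)
    g2 = np.asarray(group2, dtype=float)
    mu = g1.mean() - g2.mean()
    # variances of independent sample means add
    sigma = np.sqrt(g1.var() / len(g1) + g2.var() / len(g2))
    return mu, sigma

def ci90(mu, sigma):
    """90% confidence interval read off the normal inverse CDF."""
    return norm.ppf([0.05, 0.95], loc=mu, scale=sigma)

# synthetic stand-ins for the two pregnancy-length groups
rng = np.random.default_rng(20)
firsts = rng.normal(38.6, 2.8, size=4000)
others = rng.normal(38.5, 2.6, size=4000)
mu, sigma = diff_means_alternate(firsts, others)
low, high = ci90(mu, sigma)
```

Here sigma is the standard error of the estimated difference; the interval is centered on the observed difference rather than on zero, which is what distinguishes this from the null-hypothesis distribution.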
Exercise: In a recent paper, Stein et al. investigate the effects of an intervention intended to mitigate gender-stereotypical task allocation within student engineering teams.
Before and after the intervention, students responded to a survey that asked them to rate their contribution to each aspect of class projects on a 7-point scale.
Before the intervention, male students reported higher scores for the programming aspect of the project than female students; on average men reported a score of 3.57 with standard error 0.28. Women reported 1.91, on average, with standard error 0.32.
Compute the sampling distribution of the gender gap (the difference in means), and test whether it is statistically significant. Because you are given standard errors for the estimated means, you don’t need to know the sample size to figure out the sampling distributions.
After the intervention, the gender gap was smaller: the average score for men was 3.44 (SE 0.16); the average score for women was 3.18 (SE 0.16). Again, compute the sampling distribution of the gender gap and test it.
Finally, estimate the change in gender gap; what is the sampling distribution of this change, and is it statistically significant?
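The whole exercise can be sketched from the given summary statistics, assuming the male and female estimates are independent so their standard errors combine in quadrature:

```python
from scipy.stats import norm

def gap_dist(mu_m, se_m, mu_f, se_f):
    """Sampling distribution of the gender gap (difference in means).
    Standard errors of independent estimates combine in quadrature."""
    gap = mu_m - mu_f
    se = (se_m**2 + se_f**2) ** 0.5
    return gap, se

def two_sided_p(gap, se):
    """Two-sided p-value against the null hypothesis gap = 0."""
    return 2 * norm.sf(abs(gap) / se)

# numbers from the exercise
gap_before, se_before = gap_dist(3.57, 0.28, 1.91, 0.32)
gap_after, se_after = gap_dist(3.44, 0.16, 3.18, 0.16)
p_before = two_sided_p(gap_before, se_before)
p_after = two_sided_p(gap_after, se_after)

# the change in the gap is itself a difference of two
# independent estimates, so the same rule applies again
change = gap_after - gap_before
se_change = (se_before**2 + se_after**2) ** 0.5
p_change = two_sided_p(change, se_change)
```

Under these assumptions, the pre-intervention gap is large relative to its standard error, the post-intervention gap is not, and the change in the gap is itself large relative to its own standard error.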