
Think Stats by Allen B. Downey

Think Stats is an introduction to Probability and Statistics for Python programmers.

This is the accompanying code for this book.

Website: http://greenteapress.com/wp/think-stats-2e/

Kernel: Python 3

Examples and Exercises from Think Stats, 2nd Edition

http://thinkstats2.com

Copyright 2016 Allen B. Downey

MIT License: https://opensource.org/licenses/MIT

from __future__ import print_function, division

%matplotlib inline

import numpy as np
import pandas as pd
import random

import thinkstats2
import thinkplot

Analytic methods

If we know the parameters of the sampling distribution, we can compute confidence intervals and p-values analytically, which is computationally faster than resampling.

import scipy.stats

def EvalNormalCdfInverse(p, mu=0, sigma=1):
    return scipy.stats.norm.ppf(p, loc=mu, scale=sigma)

Here's the confidence interval for the estimated mean.

EvalNormalCdfInverse(0.05, mu=90, sigma=2.5)
85.88786593262132
EvalNormalCdfInverse(0.95, mu=90, sigma=2.5)
94.11213406737868

normal.py provides a Normal class that encapsulates what we know about arithmetic operations on normal distributions.

from normal import Normal

dist = Normal(90, 7.5**2)
dist
Normal(90, 56.25)

We can use it to compute the sampling distribution of the mean.

dist_xbar = dist.Sum(9) / 9
dist_xbar.sigma
2.5

And then compute a confidence interval.

dist_xbar.Percentile(5), dist_xbar.Percentile(95)
(85.88786593262132, 94.11213406737868)

Central Limit Theorem

If you add up independent variates from a distribution with finite mean and variance, the sum converges on a normal distribution.

The following function generates samples of different sizes from an exponential distribution.

def MakeExpoSamples(beta=2.0, iters=1000):
    """Generates samples from an exponential distribution.

    beta: parameter
    iters: number of samples to generate for each size

    returns: list of samples
    """
    samples = []
    for n in [1, 10, 100]:
        sample = [np.sum(np.random.exponential(beta, n)) for _ in range(iters)]
        samples.append((n, sample))
    return samples

This function generates normal probability plots for samples with various sizes.

def NormalPlotSamples(samples, plot=1, ylabel=''):
    """Makes normal probability plots for samples.

    samples: list of (sample size, sample) pairs
    plot: index of the first subplot
    ylabel: string label for the y-axis
    """
    for n, sample in samples:
        thinkplot.SubPlot(plot)
        thinkstats2.NormalProbabilityPlot(sample)
        thinkplot.Config(title='n=%d' % n,
                         legend=False,
                         xticks=[],
                         yticks=[],
                         xlabel='random normal variate',
                         ylabel=ylabel)
        plot += 1

The following plot shows how the sum of exponential variates converges to normal as sample size increases.

thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeExpoSamples()
NormalPlotSamples(samples, plot=1, ylabel='sum of expo values')
Image in a Jupyter notebook

The lognormal distribution has higher variance, so it requires a larger sample size before it converges to normal.

def MakeLognormalSamples(mu=1.0, sigma=1.0, iters=1000):
    """Generates samples from a lognormal distribution.

    mu: parameter
    sigma: parameter
    iters: number of samples to generate for each size

    returns: list of samples
    """
    samples = []
    for n in [1, 10, 100]:
        sample = [np.sum(np.random.lognormal(mu, sigma, n)) for _ in range(iters)]
        samples.append((n, sample))
    return samples
thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeLognormalSamples()
NormalPlotSamples(samples, ylabel='sum of lognormal values')
Image in a Jupyter notebook

The Pareto distribution has infinite variance, and sometimes infinite mean, depending on the parameters. It violates the requirements of the CLT and does not generally converge to normal.

def MakeParetoSamples(alpha=1.0, iters=1000):
    """Generates samples from a Pareto distribution.

    alpha: parameter
    iters: number of samples to generate for each size

    returns: list of samples
    """
    samples = []
    for n in [1, 10, 100]:
        sample = [np.sum(np.random.pareto(alpha, n)) for _ in range(iters)]
        samples.append((n, sample))
    return samples
thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeParetoSamples()
NormalPlotSamples(samples, ylabel='sum of Pareto values')
Image in a Jupyter notebook

If the random variates are correlated, the independence requirement of the CLT is violated, so the sums don't generally converge to normal.

To generate correlated values, we generate correlated normal values and then transform to whatever distribution we want.

def GenerateCorrelated(rho, n):
    """Generates a sequence of correlated values from a standard normal dist.

    rho: coefficient of correlation
    n: length of sequence

    returns: iterator
    """
    x = random.gauss(0, 1)
    yield x

    sigma = np.sqrt(1 - rho**2)
    for _ in range(n-1):
        x = random.gauss(x * rho, sigma)
        yield x
def GenerateExpoCorrelated(rho, n):
    """Generates a sequence of correlated values from an exponential dist.

    rho: coefficient of correlation
    n: length of sequence

    returns: NumPy array
    """
    normal = list(GenerateCorrelated(rho, n))
    uniform = scipy.stats.norm.cdf(normal)
    expo = scipy.stats.expon.ppf(uniform)
    return expo
def MakeCorrelatedSamples(rho=0.9, iters=1000):
    """Generates samples from a correlated exponential distribution.

    rho: correlation
    iters: number of samples to generate for each size

    returns: list of samples
    """
    samples = []
    for n in [1, 10, 100]:
        sample = [np.sum(GenerateExpoCorrelated(rho, n)) for _ in range(iters)]
        samples.append((n, sample))
    return samples
thinkplot.PrePlot(num=3, rows=2, cols=3)
samples = MakeCorrelatedSamples()
NormalPlotSamples(samples, ylabel='sum of correlated exponential values')
Image in a Jupyter notebook

Difference in means

Let's use analytic methods to compute a CI and p-value for an observed difference in means.

The distribution of pregnancy length is not normal, but it has finite mean and variance, so the sum (or mean) of a few thousand samples is very close to normal.

import first

live, firsts, others = first.MakeFrames()
delta = firsts.prglngth.mean() - others.prglngth.mean()
delta
0.07803726677754952

The following function computes the sampling distribution of the mean for a set of values and a given sample size.

def SamplingDistMean(data, n):
    """Computes the sampling distribution of the mean.

    data: sequence of values representing the population
    n: sample size

    returns: Normal object
    """
    mean, var = data.mean(), data.var()
    dist = Normal(mean, var)
    return dist.Sum(n) / n

Here are the sampling distributions for the means of the two groups under the null hypothesis.

dist1 = SamplingDistMean(live.prglngth, len(firsts))
dist2 = SamplingDistMean(live.prglngth, len(others))

And the sampling distribution for the difference in means.

dist_diff = dist1 - dist2
dist_diff

Under the null hypothesis, here's the chance of exceeding the observed difference.

1 - dist_diff.Prob(delta)
0.08377070425543787

And the chance of falling below the negated difference.

dist_diff.Prob(-delta)
0.08377070425543781

The sum of these probabilities is the two-sided p-value.
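
As a quick sanity check, here's a sketch (using the variables defined in the cells above) that adds the two tails directly:

p_value = (1 - dist_diff.Prob(delta)) + dist_diff.Prob(-delta)  # two-sided p-value
p_value  # roughly 0.168, twice the one-sided tail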

Testing a correlation

Under the null hypothesis (that there is no correlation), the sampling distribution of the observed correlation (suitably transformed) is a "Student t" distribution.

def StudentCdf(n):
    """Computes the CDF of correlations generated by uncorrelated variables.

    n: sample size

    returns: Cdf
    """
    ts = np.linspace(-3, 3, 101)
    ps = scipy.stats.t.cdf(ts, df=n-2)
    rs = ts / np.sqrt(n - 2 + ts**2)
    return thinkstats2.Cdf(rs, ps)

The following is a HypothesisTest that uses permutation to estimate the sampling distribution of a correlation.

import hypothesis

class CorrelationPermute(hypothesis.CorrelationPermute):
    """Tests correlations by permutation."""

    def TestStatistic(self, data):
        """Computes the test statistic.

        data: tuple of xs and ys
        """
        xs, ys = data
        return np.corrcoef(xs, ys)[0][1]

Now we can estimate the sampling distribution by permutation and compare it to the Student t distribution.

def ResampleCorrelations(live):
    """Tests the correlation between birth weight and mother's age.

    live: DataFrame for live births

    returns: sample size, observed correlation, CDF of resampled correlations
    """
    live2 = live.dropna(subset=['agepreg', 'totalwgt_lb'])
    data = live2.agepreg.values, live2.totalwgt_lb.values
    ht = CorrelationPermute(data)
    p_value = ht.PValue()
    return len(live2), ht.actual, ht.test_cdf
n, r, cdf = ResampleCorrelations(live)

model = StudentCdf(n)
thinkplot.Plot(model.xs, model.ps, color='gray', alpha=0.5, label='Student t')
thinkplot.Cdf(cdf, label='sample')
thinkplot.Config(xlabel='correlation', ylabel='CDF',
                 legend=True, loc='lower right')
Image in a Jupyter notebook

That confirms the analytic result. Now we can use the CDF of the Student t distribution to compute a p-value.

t = r * np.sqrt((n-2) / (1-r**2))
p_value = 1 - scipy.stats.t.cdf(t, df=n-2)
print(r, p_value)
0.06883397035410904 2.861466619208386e-11

Chi-squared test

The reason the chi-squared statistic is useful is that we can compute its distribution under the null hypothesis analytically.

def ChiSquaredCdf(n):
    """Discrete approximation of the chi-squared CDF with df=n-1.

    n: sample size

    returns: Cdf
    """
    xs = np.linspace(0, 25, 101)
    ps = scipy.stats.chi2.cdf(xs, df=n-1)
    return thinkstats2.Cdf(xs, ps)

Again, we can confirm the analytic result by comparing values generated by simulation with the analytic distribution.

data = [8, 9, 19, 5, 8, 11]
dt = hypothesis.DiceChiTest(data)
p_value = dt.PValue(iters=1000)
n, chi2, cdf = len(data), dt.actual, dt.test_cdf

model = ChiSquaredCdf(n)
thinkplot.Plot(model.xs, model.ps, color='gray', alpha=0.3, label='chi squared')
thinkplot.Cdf(cdf, label='sample')
thinkplot.Config(xlabel='chi-squared statistic', ylabel='CDF',
                 loc='lower right')
Image in a Jupyter notebook

And then we can use the analytic distribution to compute p-values.

p_value = 1 - scipy.stats.chi2.cdf(chi2, df=n-1)
print(chi2, p_value)
11.6 0.04069938850404997

Exercises

Exercise: In Section 5.4, we saw that the distribution of adult weights is approximately lognormal. One possible explanation is that the weight a person gains each year is proportional to their current weight. In that case, adult weight is the product of a large number of multiplicative factors:

w = w0 f1 f2 ... fn

where w is adult weight, w0 is birth weight, and fi is the weight gain factor for year i.

The log of a product is the sum of the logs of the factors:

log w = log w0 + log f1 + log f2 + ... + log fn

So by the Central Limit Theorem, the distribution of logw is approximately normal for large n, which implies that the distribution of w is lognormal.

To model this phenomenon, choose a distribution for f that seems reasonable, then generate a sample of adult weights by choosing a random value from the distribution of birth weights, choosing a sequence of factors from the distribution of f, and computing the product. What value of n is needed to converge to a lognormal distribution?
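
Here is a minimal sketch of the simulation setup, not a full solution; the normal distribution of factors and its parameters are assumptions you should feel free to change:

def GenerateAdultWeight(birth_weights, n):
    """Generates a random adult weight: a random birth weight times n annual factors.

    birth_weights: sequence of birth weights in lbs
    n: number of years of growth
    """
    bw = random.choice(birth_weights)
    factors = np.random.normal(1.09, 0.03, n)  # assumed distribution of f
    return bw * np.prod(factors)

# e.g., simulate 1000 adult weights after 40 years, then check the log for normality
birth_weights = live.totalwgt_lb.dropna().values
aws = [GenerateAdultWeight(birth_weights, 40) for _ in range(1000)]
thinkstats2.NormalProbabilityPlot(np.log10(aws))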

# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here

Exercise: In Section 14.6 we used the Central Limit Theorem to find the sampling distribution of the difference in means, δ, under the null hypothesis that both samples are drawn from the same population.

We can also use this distribution to find the standard error of the estimate and confidence intervals, but that would only be approximately correct. To be more precise, we should compute the sampling distribution of δ under the alternate hypothesis that the samples are drawn from different populations.

Compute this distribution and use it to calculate the standard error and a 90% confidence interval for the difference in means.
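
As a starting point, here's a sketch that reuses SamplingDistMean from above, but under the alternate hypothesis: each group's sampling distribution is computed from its own data.

dist1 = SamplingDistMean(firsts.prglngth, len(firsts))
dist2 = SamplingDistMean(others.prglngth, len(others))
dist_delta = dist1 - dist2

dist_delta.sigma  # standard error of the estimated difference
dist_delta.Percentile(5), dist_delta.Percentile(95)  # 90% confidence interval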

# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here

Exercise: In a recent paper, Stein et al. investigate the effects of an intervention intended to mitigate gender-stereotypical task allocation within student engineering teams.

Before and after the intervention, students responded to a survey that asked them to rate their contribution to each aspect of class projects on a 7-point scale.

Before the intervention, male students reported higher scores for the programming aspect of the project than female students; on average men reported a score of 3.57 with standard error 0.28. Women reported 1.91, on average, with standard error 0.32.

Compute the sampling distribution of the gender gap (the difference in means), and test whether it is statistically significant. Because you are given standard errors for the estimated means, you don’t need to know the sample size to figure out the sampling distributions.

After the intervention, the gender gap was smaller: the average score for men was 3.44 (SE 0.16); the average score for women was 3.18 (SE 0.16). Again, compute the sampling distribution of the gender gap and test it.

Finally, estimate the change in gender gap; what is the sampling distribution of this change, and is it statistically significant?
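
A sketch of the setup for the "before" comparison, treating each reported mean as normally distributed with variance equal to its squared standard error:

male_before = Normal(3.57, 0.28**2)
female_before = Normal(1.91, 0.32**2)
gap_before = male_before - female_before

gap_before.sigma    # standard error of the gap
gap_before.Prob(0)  # chance the gap is actually <= 0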

# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here