
Think Stats by Allen B. Downey

Think Stats is an introduction to Probability and Statistics for Python programmers.

This is the accompanying code for this book.

Website: http://greenteapress.com/wp/think-stats-2e/

License: GPL3
Kernel: Python 3

Examples and Exercises from Think Stats, 2nd Edition

http://thinkstats2.com

Copyright 2016 Allen B. Downey

MIT License: https://opensource.org/licenses/MIT

from __future__ import print_function, division

%matplotlib inline

import numpy as np
import random

import thinkstats2
import thinkplot

Hypothesis testing

The following is a version of thinkstats2.HypothesisTest with just the essential methods:

class HypothesisTest(object):

    def __init__(self, data):
        self.data = data
        self.MakeModel()
        self.actual = self.TestStatistic(data)

    def PValue(self, iters=1000):
        self.test_stats = [self.TestStatistic(self.RunModel())
                           for _ in range(iters)]

        count = sum(1 for x in self.test_stats if x >= self.actual)
        return count / iters

    def TestStatistic(self, data):
        raise UnimplementedMethodException()

    def MakeModel(self):
        pass

    def RunModel(self):
        raise UnimplementedMethodException()

And here's an example that uses it to compute the p-value of an experiment where we toss a coin 250 times and get 140 heads.

class CoinTest(HypothesisTest):

    def TestStatistic(self, data):
        heads, tails = data
        test_stat = abs(heads - tails)
        return test_stat

    def RunModel(self):
        heads, tails = self.data
        n = heads + tails
        sample = [random.choice('HT') for _ in range(n)]
        hist = thinkstats2.Hist(sample)
        data = hist['H'], hist['T']
        return data

The p-value turns out to be about 7%, which is considered on the border of statistical significance.

ct = CoinTest((140, 110))
pvalue = ct.PValue()
pvalue
0.065
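
As a sanity check (not part of the book's code), the same question can be answered with an exact two-sided binomial test. This is only a sketch, assuming SciPy 1.7 or later is installed:

from scipy import stats

# Exact two-sided binomial test: 140 heads in 250 tosses of a fair coin.
# The p-value should be close to the simulated estimate above.
result = stats.binomtest(140, n=250, p=0.5, alternative='two-sided')
result.pvalue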

Permutation test

To compute the p-value of an observed difference in means, we can assume that there is no difference between the groups and generate simulated results by shuffling the data.

class DiffMeansPermute(thinkstats2.HypothesisTest):

    def TestStatistic(self, data):
        group1, group2 = data
        test_stat = abs(group1.mean() - group2.mean())
        return test_stat

    def MakeModel(self):
        group1, group2 = self.data
        self.n, self.m = len(group1), len(group2)
        self.pool = np.hstack((group1, group2))

    def RunModel(self):
        np.random.shuffle(self.pool)
        data = self.pool[:self.n], self.pool[self.n:]
        return data

Here's an example where we test the observed difference in pregnancy length for first babies and others.

import first

live, firsts, others = first.MakeFrames()
data = firsts.prglngth.values, others.prglngth.values

The p-value is about 17%, which means it is plausible that the observed difference is just the result of random sampling, and might not be generally true in the population.

ht = DiffMeansPermute(data)
pvalue = ht.PValue()
pvalue
0.17

Here's the distribution of the test statistic (the difference in means) over many simulated samples:

ht.PlotCdf()
thinkplot.Config(xlabel='test statistic', ylabel='CDF')
[Figure: CDF of the test statistic (difference in means) under the null hypothesis]

Under the null hypothesis, we often see differences bigger than the observed difference.
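
One way to see this directly (a small check, not from the book) is to count how many of the simulated test statistics stored by PValue meet or exceed the observed value:

# ht.test_stats was populated by the earlier call to ht.PValue().
exceed = sum(1 for x in ht.test_stats if x >= ht.actual)
exceed, len(ht.test_stats)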

class DiffMeansOneSided(DiffMeansPermute):

    def TestStatistic(self, data):
        group1, group2 = data
        test_stat = group1.mean() - group2.mean()
        return test_stat

If the hypothesis under test is that first babies come late, the appropriate test statistic is the raw difference between first babies and others, rather than the absolute value of the difference. In that case, the p-value is smaller, because we are testing a more specific hypothesis.

ht = DiffMeansOneSided(data)
pvalue = ht.PValue()
pvalue
0.076

But in this example, the result is still not statistically significant.

Difference in standard deviation

In this framework, it is easy to use other test statistics. For example, if we think the variance for first babies might be higher, we can run this test:

class DiffStdPermute(DiffMeansPermute):

    def TestStatistic(self, data):
        group1, group2 = data
        test_stat = group1.std() - group2.std()
        return test_stat

ht = DiffStdPermute(data)
pvalue = ht.PValue()
pvalue
0.1

But that's not statistically significant either.

Testing correlation

To check whether an observed correlation is statistically significant, we can run a permutation test with a different test statistic.

class CorrelationPermute(thinkstats2.HypothesisTest):

    def TestStatistic(self, data):
        xs, ys = data
        test_stat = abs(thinkstats2.Corr(xs, ys))
        return test_stat

    def RunModel(self):
        xs, ys = self.data
        xs = np.random.permutation(xs)
        return xs, ys

Here's an example testing the correlation between birth weight and mother's age.

cleaned = live.dropna(subset=['agepreg', 'totalwgt_lb'])
data = cleaned.agepreg.values, cleaned.totalwgt_lb.values
ht = CorrelationPermute(data)
pvalue = ht.PValue()
pvalue
0.0

The reported p-value is 0, which means that in 1000 trials under the null hypothesis we never saw a correlation as big as the observed correlation. That means the p-value is probably smaller than 1/1000, but it is not actually 0.
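
If a tighter bound is needed, one option (a sketch, not from the book) is simply to run more simulations; PValue takes an iters argument:

# With more iterations the resolution of the estimated p-value improves:
# the smallest distinguishable value is 1/iters instead of 1/1000.
pvalue = ht.PValue(iters=10000)
pvalue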

To get a sense of how unexpected the observed value is under the null hypothesis, we can compare the actual correlation to the largest value we saw in the simulations.

ht.actual, ht.MaxTestStat()
(0.06883397035410908, 0.03425493926694762)

Testing proportions

Here's an example that tests whether the outcome of rolling a six-sided die is suspicious; the test statistic is the total absolute difference between the observed frequencies and the expected long-term averages.

class DiceTest(thinkstats2.HypothesisTest):

    def TestStatistic(self, data):
        observed = data
        n = sum(observed)
        expected = np.ones(6) * n / 6
        test_stat = sum(abs(observed - expected))
        return test_stat

    def RunModel(self):
        n = sum(self.data)
        values = [1, 2, 3, 4, 5, 6]
        rolls = np.random.choice(values, n, replace=True)
        hist = thinkstats2.Hist(rolls)
        freqs = hist.Freqs(values)
        return freqs

Here's an example using the data from the book:

data = [8, 9, 19, 5, 8, 11]
dt = DiceTest(data)
pvalue = dt.PValue(iters=10000)
pvalue
0.1293

The observed deviation from the expected values is not statistically significant.

By convention, it is more common to test data like this using the chi-squared statistic:

class DiceChiTest(DiceTest):

    def TestStatistic(self, data):
        observed = data
        n = sum(observed)
        expected = np.ones(6) * n / 6
        test_stat = sum((observed - expected)**2 / expected)
        return test_stat

Using this test, we get a smaller p-value:

dt = DiceChiTest(data)
pvalue = dt.PValue(iters=10000)
pvalue
0.0406

Taking this result at face value, we might consider the data statistically significant, but considering the results of both tests, I would not draw any strong conclusions.
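
For comparison (not part of the book's code), the analytic chi-squared test computes the same statistic but gets its p-value from the chi-squared distribution rather than from simulation. A sketch, assuming SciPy is available:

from scipy import stats

observed = [8, 9, 19, 5, 8, 11]
expected = [sum(observed) / 6] * 6   # a fair die: 10 expected rolls per side

# chisquare returns the chi-squared statistic and an analytic p-value.
stats.chisquare(f_obs=observed, f_exp=expected)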

Chi-square test of pregnancy length

class PregLengthTest(thinkstats2.HypothesisTest):

    def MakeModel(self):
        firsts, others = self.data
        self.n = len(firsts)
        self.pool = np.hstack((firsts, others))

        pmf = thinkstats2.Pmf(self.pool)
        self.values = range(35, 44)
        self.expected_probs = np.array(pmf.Probs(self.values))

    def RunModel(self):
        np.random.shuffle(self.pool)
        data = self.pool[:self.n], self.pool[self.n:]
        return data

    def TestStatistic(self, data):
        firsts, others = data
        stat = self.ChiSquared(firsts) + self.ChiSquared(others)
        return stat

    def ChiSquared(self, lengths):
        hist = thinkstats2.Hist(lengths)
        observed = np.array(hist.Freqs(self.values))
        expected = self.expected_probs * len(lengths)
        stat = sum((observed - expected)**2 / expected)
        return stat

If we specifically test the deviations of first babies and others from the expected number of births in each week of pregnancy, the results are statistically significant with a very small p-value. But at this point we have run so many tests, we should not be surprised to find at least one that seems significant.

data = firsts.prglngth.values, others.prglngth.values
ht = PregLengthTest(data)
p_value = ht.PValue()
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
p-value = 0.0
actual = 101.50141482893264
ts max = 26.19578807986594

Power

Here's a function that estimates the probability of getting a non-significant p-value even if there really is a difference between the groups.

def FalseNegRate(data, num_runs=1000):
    """Computes the chance of a false negative based on resampling.

    data: pair of sequences
    num_runs: how many experiments to simulate

    returns: float false negative rate
    """
    group1, group2 = data
    count = 0

    for i in range(num_runs):
        sample1 = thinkstats2.Resample(group1)
        sample2 = thinkstats2.Resample(group2)
        ht = DiffMeansPermute((sample1, sample2))
        p_value = ht.PValue(iters=101)
        if p_value > 0.05:
            count += 1

    return count / num_runs

neg_rate = FalseNegRate(data)
neg_rate
0.687

In this example, the false negative rate is about 70%, which means that the power of the test (the probability of getting a statistically significant result if the actual difference is 0.078 weeks) is only about 30%.

Exercises

Exercise: As sample size increases, the power of a hypothesis test increases, which means it is more likely to be positive if the effect is real. Conversely, as sample size decreases, the test is less likely to be positive even if the effect is real.

To investigate this behavior, run the tests in this chapter with different subsets of the NSFG data. You can use thinkstats2.SampleRows to select a random subset of the rows in a DataFrame.

What happens to the p-values of these tests as sample size decreases? What is the smallest sample size that yields a positive test?
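
As a starting point (this is only a sketch of how thinkstats2.SampleRows might be used, not a solution), one could subsample the live births and rerun one of the tests:

# Sketch: take a random subset of the pregnancy data and rerun DiffMeansPermute.
# SampleRows(df, nrows) is assumed to return a random subset of rows.
subset = thinkstats2.SampleRows(live, 1000)
sub_firsts = subset[subset.birthord == 1]
sub_others = subset[subset.birthord != 1]

sub_data = sub_firsts.prglngth.values, sub_others.prglngth.values
ht = DiffMeansPermute(sub_data)
ht.PValue()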

# Solution goes here
# Solution goes here
# Solution goes here

Exercise: In Section 9.3, we simulated the null hypothesis by permutation; that is, we treated the observed values as if they represented the entire population, and randomly assigned the members of the population to the two groups.

An alternative is to use the sample to estimate the distribution for the population, then draw a random sample from that distribution. This process is called resampling. There are several ways to implement resampling, but one of the simplest is to draw a sample with replacement from the observed values, as in Section 9.10.

Write a class named DiffMeansResample that inherits from DiffMeansPermute and overrides RunModel to implement resampling, rather than permutation.

Use this model to test the differences in pregnancy length and birth weight. How much does the model affect the results?
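
The key step, drawing each group with replacement from the pooled values, might look like this (a sketch of the idea only, assuming data is the pair of pregnancy-length arrays from above; writing DiffMeansResample is left as the exercise):

# Draw one resampled pair of groups from the pooled values, with replacement.
pool = np.hstack(data)
sample1 = np.random.choice(pool, len(data[0]), replace=True)
sample2 = np.random.choice(pool, len(data[1]), replace=True)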

# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here