
Think Stats by Allen B. Downey

Think Stats is an introduction to Probability and Statistics for Python programmers.

This is the accompanying code for this book.

Website: http://greenteapress.com/wp/think-stats-2e/

License: GPL3
Kernel: Python 3

Examples and Exercises from Think Stats, 2nd Edition

http://thinkstats2.com

Copyright 2016 Allen B. Downey

MIT License: https://opensource.org/licenses/MIT

from __future__ import print_function, division

%matplotlib inline

import numpy as np
import random

import thinkstats2
import thinkplot

Least squares

One more time, let's load up the NSFG data.

import first

live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
ages = live.agepreg
weights = live.totalwgt_lb

The following function computes the intercept and slope of the least squares fit.

from thinkstats2 import Mean, MeanVar, Var, Std, Cov

def LeastSquares(xs, ys):
    meanx, varx = MeanVar(xs)
    meany = Mean(ys)

    slope = Cov(xs, ys, meanx, meany) / varx
    inter = meany - slope * meanx

    return inter, slope

Here's the least squares fit to birth weight as a function of mother's age.

inter, slope = LeastSquares(ages, weights)
inter, slope
(6.8303969733110526, 0.017453851471802753)

The intercept is often easier to interpret if we evaluate it at the mean of the independent variable.

inter + slope * 25
7.2667432601061215
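As a quick check (a sketch, not in the original), we can evaluate the line at the actual mean age; the result should be close to the value above, since the mean is near 25:

inter + slope * ages.mean()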

And the slope is easier to interpret if we express it in pounds per decade (or ounces per year).

slope * 10
0.17453851471802753

The following function evaluates the fitted line at the given xs.

def FitLine(xs, inter, slope):
    fit_xs = np.sort(xs)
    fit_ys = inter + slope * fit_xs
    return fit_xs, fit_ys

And here's an example.

fit_xs, fit_ys = FitLine(ages, inter, slope)

Here's a scatterplot of the data with the fitted line.

thinkplot.Scatter(ages, weights, color='blue', alpha=0.1, s=10)
thinkplot.Plot(fit_xs, fit_ys, color='white', linewidth=3)
thinkplot.Plot(fit_xs, fit_ys, color='red', linewidth=2)
thinkplot.Config(xlabel="Mother's age (years)",
                 ylabel='Birth weight (lbs)',
                 axis=[10, 45, 0, 15],
                 legend=False)
[Figure: scatter plot of birth weight vs. mother's age with the fitted line]

Residuals

The following function computes the residuals.

def Residuals(xs, ys, inter, slope):
    xs = np.asarray(xs)
    ys = np.asarray(ys)
    res = ys - (inter + slope * xs)
    return res

Now we can add the residuals as a column in the DataFrame.

live['residual'] = Residuals(ages, weights, inter, slope)

To visualize the residuals, I'll split the respondents into groups by age, then plot the percentiles of the residuals versus the average age in each group.

First I'll make the groups and compute the average age in each group.

bins = np.arange(10, 48, 3)
indices = np.digitize(live.agepreg, bins)
groups = live.groupby(indices)

age_means = [group.agepreg.mean() for _, group in groups][1:-1]
age_means
[15.212333333333335, 17.740359281437126, 20.506304824561404, 23.455752212389378, 26.435156146179406, 29.411177432542924, 32.30232530120482, 35.240273631840786, 38.10876470588235, 40.91205882352941]

Next I'll compute the CDF of the residuals in each group.

cdfs = [thinkstats2.Cdf(group.residual) for _, group in groups][1:-1]

The following function plots percentiles of the residuals against the average age in each group.

def PlotPercentiles(age_means, cdfs):
    thinkplot.PrePlot(3)
    for percent in [75, 50, 25]:
        weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
        label = '%dth' % percent
        thinkplot.Plot(age_means, weight_percentiles, label=label)

The following figure shows the 25th, 50th, and 75th percentiles.

Curvature in the residuals suggests a non-linear relationship.

PlotPercentiles(age_means, cdfs)

thinkplot.Config(xlabel="Mother's age (years)",
                 ylabel='Residual (lbs)',
                 xlim=[10, 45])
[Figure: 25th, 50th, and 75th percentiles of the residuals vs. mean age in each group]

Sampling distribution

To estimate the sampling distribution of inter and slope, I'll use resampling.

def SampleRows(df, nrows, replace=False):
    """Choose a sample of rows from a DataFrame.

    df: DataFrame
    nrows: number of rows
    replace: whether to sample with replacement

    returns: DataFrame
    """
    indices = np.random.choice(df.index, nrows, replace=replace)
    sample = df.loc[indices]
    return sample

def ResampleRows(df):
    """Resamples rows from a DataFrame.

    df: DataFrame

    returns: DataFrame
    """
    return SampleRows(df, len(df), replace=True)

The following function resamples the given dataframe and returns lists of estimates for inter and slope.

def SamplingDistributions(live, iters=101):
    t = []
    for _ in range(iters):
        sample = ResampleRows(live)
        ages = sample.agepreg
        weights = sample.totalwgt_lb
        estimates = LeastSquares(ages, weights)
        t.append(estimates)

    inters, slopes = zip(*t)
    return inters, slopes

Here's an example.

inters, slopes = SamplingDistributions(live, iters=1001)

The following function takes a list of estimates and prints the mean, standard error, and 90% confidence interval.

def Summarize(estimates, actual=None):
    mean = Mean(estimates)
    stderr = Std(estimates, mu=actual)
    cdf = thinkstats2.Cdf(estimates)
    ci = cdf.ConfidenceInterval(90)
    print('mean, SE, CI', mean, stderr, ci)

Here's the summary for inter.

Summarize(inters)
mean, SE, CI 6.831577849135134 0.0716487265060226 (6.7168573183657, 6.947976272349873)

And for slope.

Summarize(slopes)
mean, SE, CI 0.01740790147806931 0.0028529752707106126 (0.012768537045967305, 0.02193491955982924)

Exercise: Use ResampleRows and generate a list of estimates for the mean birth weight. Use Summarize to compute the SE and CI for these estimates.

# Solution goes here
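One possible solution (a sketch, reusing ResampleRows and Summarize from above; the number of iterations is my choice):

estimates = [ResampleRows(live).totalwgt_lb.mean() for _ in range(101)]
Summarize(estimates)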

Visualizing uncertainty

To show the uncertainty of the estimated slope and intercept, we can generate a fitted line for each resampled estimate and plot them on top of each other.

for slope, inter in zip(slopes, inters):
    fxs, fys = FitLine(age_means, inter, slope)
    thinkplot.Plot(fxs, fys, color='gray', alpha=0.01)

thinkplot.Config(xlabel="Mother's age (years)",
                 ylabel='Birth weight (lbs)',
                 xlim=[10, 45])
[Figure: fitted lines from the resampled estimates, plotted on top of each other]

Or we can make a neater (and more efficient) plot by computing fitted lines and finding percentiles of the fits at each value of the independent variable.

def PlotConfidenceIntervals(xs, inters, slopes, percent=90, **options):
    fys_seq = []
    for inter, slope in zip(inters, slopes):
        fxs, fys = FitLine(xs, inter, slope)
        fys_seq.append(fys)

    p = (100 - percent) / 2
    percents = p, 100 - p
    low, high = thinkstats2.PercentileRows(fys_seq, percents)
    thinkplot.FillBetween(fxs, low, high, **options)

This example shows the confidence interval for the fitted values at each mother's age.

PlotConfidenceIntervals(age_means, inters, slopes, percent=90,
                        color='gray', alpha=0.3, label='90% CI')
PlotConfidenceIntervals(age_means, inters, slopes, percent=50,
                        color='gray', alpha=0.5, label='50% CI')

thinkplot.Config(xlabel="Mother's age (years)",
                 ylabel='Birth weight (lbs)',
                 xlim=[10, 45])
[Figure: 50% and 90% confidence intervals for the fitted values at each age]

Coefficient of determination

The coefficient of determination compares the variance of the residuals to the variance of the dependent variable.

def CoefDetermination(ys, res):
    return 1 - Var(res) / Var(ys)

For birth weight and mother's age, $R^2$ is very small, indicating that the mother's age predicts a small part of the variance in birth weight.

inter, slope = LeastSquares(ages, weights)
res = Residuals(ages, weights, inter, slope)
r2 = CoefDetermination(weights, res)
r2
0.004738115474710258

We can confirm that $R^2 = \rho^2$:

print('rho', thinkstats2.Corr(ages, weights))
print('R', np.sqrt(r2))
rho 0.06883397035410908
R 0.06883397035410828

To express predictive power, I think it's useful to compare the standard deviation of the residuals to the standard deviation of the dependent variable, as a measure of the RMSE if you try to guess birth weight with and without taking mother's age into account.

print('Std(ys)', Std(weights))
print('Std(res)', Std(res))
Std(ys) 1.4082155338406197
Std(res) 1.4048754287857832

As another example of the same idea, here's how much we can improve guesses about IQ if we know someone's SAT scores.

var_ys = 15**2
rho = 0.72
r2 = rho**2
var_res = (1 - r2) * var_ys
std_res = np.sqrt(var_res)
std_res
10.409610943738484

Hypothesis testing with slopes

Here's a HypothesisTest that uses permutation to test whether the observed slope is statistically significant.

class SlopeTest(thinkstats2.HypothesisTest):

    def TestStatistic(self, data):
        ages, weights = data
        _, slope = thinkstats2.LeastSquares(ages, weights)
        return slope

    def MakeModel(self):
        _, weights = self.data
        self.ybar = weights.mean()
        self.res = weights - self.ybar

    def RunModel(self):
        ages, _ = self.data
        weights = self.ybar + np.random.permutation(self.res)
        return ages, weights

And it is.

ht = SlopeTest((ages, weights))
pvalue = ht.PValue()
pvalue
0.0

Under the null hypothesis, the largest slope we observe after 1000 tries is substantially less than the observed value.

ht.actual, ht.MaxTestStat()
(0.017453851471802753, 0.007848978275500441)

We can also use resampling to estimate the sampling distribution of the slope.

sampling_cdf = thinkstats2.Cdf(slopes)

The distribution of slopes under the null hypothesis, and the sampling distribution of the slope under resampling, have the same shape, but one has mean at 0 and the other has mean at the observed slope.

To compute a p-value, we can count how often the estimated slope under the null hypothesis exceeds the observed slope, or how often the estimated slope under resampling falls below 0.

thinkplot.PrePlot(2)
thinkplot.Plot([0, 0], [0, 1], color='0.8')
ht.PlotCdf(label='null hypothesis')

thinkplot.Cdf(sampling_cdf, label='sampling distribution')

thinkplot.Config(xlabel='slope (lbs / year)',
                 ylabel='CDF',
                 xlim=[-0.03, 0.03],
                 legend=True, loc='upper left')
[Figure: CDF of the slope under the null hypothesis and the sampling distribution of the slope]

Here's how to get a p-value from the sampling distribution.

pvalue = sampling_cdf[0]
pvalue
0

Resampling with weights

Resampling provides a convenient way to take into account the sampling weights associated with respondents in a stratified survey design.

The following function resamples rows with probabilities proportional to weights.

def ResampleRowsWeighted(df, column='finalwgt'):
    weights = df[column]
    cdf = thinkstats2.Cdf(dict(weights))
    indices = cdf.Sample(len(weights))
    sample = df.loc[indices]
    return sample

We can use it to estimate the mean birthweight and compute SE and CI.

iters = 100
estimates = [ResampleRowsWeighted(live).totalwgt_lb.mean()
             for _ in range(iters)]
Summarize(estimates)
mean, SE, CI 7.348823785682672 0.015312980634443476 (7.324235173710998, 7.373298849302943)

And here's what the same calculation looks like if we ignore the weights.

estimates = [thinkstats2.ResampleRows(live).totalwgt_lb.mean()
             for _ in range(iters)]
Summarize(estimates)
mean, SE, CI 7.2679122870104 0.013850235571584563 (7.2415841447222835, 7.288282805930516)

The difference is non-negligible, which suggests that there are differences in birth weight between the strata in the survey.

Exercises

Exercise: Using the data from the BRFSS, compute the linear least squares fit for log(weight) versus height. How would you best present the estimated parameters for a model like this where one of the variables is log-transformed? If you were trying to guess someone’s weight, how much would it help to know their height?

Like the NSFG, the BRFSS oversamples some groups and provides a sampling weight for each respondent. In the BRFSS data, the variable name for these weights is totalwt. Use resampling, with and without weights, to estimate the mean height of respondents in the BRFSS, the standard error of the mean, and a 90% confidence interval. How much does correct weighting affect the estimates?

Read the BRFSS data and extract heights and log weights.

import brfss

df = brfss.ReadBrfss(nrows=None)
df = df.dropna(subset=['htm3', 'wtkg2'])
heights, weights = df.htm3, df.wtkg2
log_weights = np.log10(weights)

Estimate intercept and slope.

# Solution goes here
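One possible solution, reusing LeastSquares from above (a sketch, not the author's solution):

inter, slope = LeastSquares(heights, log_weights)
inter, slope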

Make a scatter plot of the data and show the fitted line.

# Solution goes here
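A sketch of one way to do it; the alpha and marker size are my choices:

thinkplot.Scatter(heights, log_weights, alpha=0.01, s=5)
fxs, fys = FitLine(heights, inter, slope)
thinkplot.Plot(fxs, fys, color='red')
thinkplot.Config(xlabel='Height (cm)', ylabel='log10 weight (kg)', legend=False)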

Make the same plot but apply the inverse transform to show weights on a linear (not log) scale.

# Solution goes here
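A possible sketch: since the fit was computed on log10 weights, raising 10 to the fitted values recovers weights in kg:

thinkplot.Scatter(heights, weights, alpha=0.01, s=5)
fxs, fys = FitLine(heights, inter, slope)
thinkplot.Plot(fxs, 10**fys, color='red')
thinkplot.Config(xlabel='Height (cm)', ylabel='Weight (kg)', legend=False)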

Plot percentiles of the residuals.

# Solution goes here
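A sketch that mirrors the NSFG example above; the bin edges are my choice, not the author's:

res = Residuals(heights, log_weights, inter, slope)
df['residual'] = res

bins = np.arange(130, 210, 5)
indices = np.digitize(df.htm3, bins)
groups = df.groupby(indices)

means = [group.htm3.mean() for _, group in groups][1:-1]
cdfs = [thinkstats2.Cdf(group.residual) for _, group in groups][1:-1]

PlotPercentiles(means, cdfs)
thinkplot.Config(xlabel='Height (cm)', ylabel='Residual (log10 kg)', legend=False)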

Compute correlation.

# Solution goes here
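One possibility, using thinkstats2.Corr:

rho = thinkstats2.Corr(heights, log_weights)
rho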

Compute coefficient of determination.

# Solution goes here
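One possibility, reusing the residuals computed above:

r2 = CoefDetermination(log_weights, res)
r2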

Confirm that $R^2 = \rho^2$.

# Solution goes here
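For example (not the author's solution), printing both quantities side by side:

print('rho**2', rho**2)
print('R^2', r2)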

Compute Std(ys), which is the RMSE of predictions that don't use height.

# Solution goes here
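One possibility:

std_ys = Std(log_weights)
std_ys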

Compute Std(res), the RMSE of predictions that do use height.

# Solution goes here
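And similarly for the residuals:

std_res = Std(res)
std_res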

How much does height information reduce RMSE?

# Solution goes here
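One way to quantify it (my framing) is the fractional reduction in RMSE when we use height:

1 - std_res / std_ys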

Use resampling to compute sampling distributions for inter and slope.

# Solution goes here
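A sketch that follows SamplingDistributions above; 100 iterations is my choice:

t = []
for _ in range(100):
    sample = ResampleRows(df)
    estimates = LeastSquares(sample.htm3, np.log10(sample.wtkg2))
    t.append(estimates)

inters, slopes = zip(*t)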

Plot the sampling distribution of slope.

# Solution goes here
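One possibility, plotting the CDF of the resampled slopes:

slope_cdf = thinkstats2.Cdf(slopes)
thinkplot.Cdf(slope_cdf)
thinkplot.Config(xlabel='slope (log10 kg / cm)', ylabel='CDF', legend=False)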

Compute the p-value of the slope.

# Solution goes here
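A sketch: as with the birth weight example, the p-value is the probability that a slope from the sampling distribution falls at or below 0:

pvalue = slope_cdf[0]
pvalue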

Compute the 90% confidence interval of slope.

# Solution goes here
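One possibility, using the Cdf computed above:

ci = slope_cdf.ConfidenceInterval(90)
ci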

Compute the mean of the sampling distribution.

# Solution goes here
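One possibility, reusing Mean from thinkstats2:

mean = Mean(slopes)
mean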

Compute the standard deviation of the sampling distribution, which is the standard error.

# Solution goes here
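And the standard deviation, which serves as the standard error:

stderr = Std(slopes)
stderr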

Resample rows without weights, compute mean height, and summarize results.

# Solution goes here
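A sketch, reusing ResampleRows and Summarize; the iteration count is my choice:

estimates = [ResampleRows(df).htm3.mean() for _ in range(100)]
Summarize(estimates)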

Resample rows with weights. Note that the weight column in this dataset is called finalwt.

# Solution goes here
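A sketch that passes the BRFSS weight column named in the note above to ResampleRowsWeighted:

estimates = [ResampleRowsWeighted(df, column='finalwt').htm3.mean()
             for _ in range(100)]
Summarize(estimates)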