
Think Stats by Allen B. Downey is an introduction to probability and statistics for Python programmers.

This is the accompanying code for this book.

Website: http://greenteapress.com/wp/think-stats-2e/


Examples and Exercises from Think Stats, 2nd Edition

http://thinkstats2.com

Copyright 2016 Allen B. Downey

MIT License: https://opensource.org/licenses/MIT

from __future__ import print_function, division

%matplotlib inline

import numpy as np

import brfss
import thinkstats2
import thinkplot

Scatter plots

I'll start with the data from the BRFSS again.

df = brfss.ReadBrfss(nrows=None)

The following function selects a random subset of a DataFrame.

def SampleRows(df, nrows, replace=False):
    indices = np.random.choice(df.index, nrows, replace=replace)
    sample = df.loc[indices]
    return sample
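SampleRows draws from NumPy's global random state, so the sample changes on every run. If you want a reproducible sample, you can seed the generator first (an optional step, not part of the original code; the seed value is arbitrary):

np.random.seed(17)   # any fixed seed makes SampleRows reproducible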

I'll extract the height in cm and the weight in kg of the respondents in the sample.

sample = SampleRows(df, 5000)
heights, weights = sample.htm3, sample.wtkg2

Here's a simple scatter plot with alpha=1, so each data point is fully saturated.

thinkplot.Scatter(heights, weights, alpha=1)
thinkplot.Config(xlabel='Height (cm)',
                 ylabel='Weight (kg)',
                 axis=[140, 210, 20, 200],
                 legend=False)
[Figure: scatter plot of weight versus height, alpha=1]

The data fall in obvious columns because they were rounded off. We can reduce this visual artifact by adding some random noise to the data.

NOTE: The version of Jitter in the book uses noise with a uniform distribution. Here I am using a normal distribution. The normal distribution does a better job of blurring artifacts, but the uniform distribution might be more true to the data.
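For comparison, here is a sketch of the uniform-noise version the note describes; the name JitterUniform is mine, and the book's exact implementation may differ:

def JitterUniform(values, jitter=0.5):
    # Add noise drawn uniformly from [-jitter, jitter].
    n = len(values)
    return np.random.uniform(-jitter, jitter, n) + values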

def Jitter(values, jitter=0.5):
    n = len(values)
    return np.random.normal(0, jitter, n) + values

Heights were probably rounded off to the nearest inch, which is 2.54 cm, so I'll jitter the heights with noise whose standard deviation is 1.4 cm, and the weights with 0.5 kg.

heights = Jitter(heights, 1.4)
weights = Jitter(weights, 0.5)

And here's what the jittered data look like.

thinkplot.Scatter(heights, weights, alpha=1.0)
thinkplot.Config(xlabel='Height (cm)',
                 ylabel='Weight (kg)',
                 axis=[140, 210, 20, 200],
                 legend=False)
[Figure: jittered scatter plot of weight versus height]

The columns are gone, but now we have a different problem: saturation. Where many points overlap, the plot is not as dark as it should be, so the outliers look dark relative to the dense core, giving the impression that the data are more scattered than they actually are.

This is a surprisingly common problem, even in papers published in peer-reviewed journals.

We can usually solve the saturation problem by adjusting alpha and the size of the markers, s.

thinkplot.Scatter(heights, weights, alpha=0.1, s=10)
thinkplot.Config(xlabel='Height (cm)',
                 ylabel='Weight (kg)',
                 axis=[140, 210, 20, 200],
                 legend=False)
[Figure: scatter plot with alpha=0.1 and marker size s=10]

That's better. This version of the figure shows the location and shape of the distribution most accurately. There are still some apparent columns and rows where, most likely, people reported their height and weight using rounded values. If that effect is important, this figure makes it apparent; if it is not important, we could use more aggressive jittering to minimize it.

An alternative to a scatter plot is a hexbin plot, which breaks the plane into hexagonal bins, counts the number of respondents in each bin, and colors each bin in proportion to its count.

thinkplot.HexBin(heights, weights)
thinkplot.Config(xlabel='Height (cm)',
                 ylabel='Weight (kg)',
                 axis=[140, 210, 20, 200],
                 legend=False)
[Figure: hexbin plot of weight versus height]

In this case the binned plot does a pretty good job of showing the location and shape of the distribution. It obscures the row and column effects, which may or may not be a good thing.
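If you are not using thinkplot, Matplotlib's hexbin does the same thing directly; a minimal sketch, assuming the usual pyplot import (gridsize and cmap are illustrative choices, not values from the book):

import matplotlib.pyplot as plt

plt.hexbin(heights, weights, gridsize=30, cmap='Blues')
plt.xlabel('Height (cm)')
plt.ylabel('Weight (kg)')
plt.axis([140, 210, 20, 200])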

Exercise: So far we have been working with a subset of only 5000 respondents. When we include the entire dataset, making an effective scatter plot can be tricky. As an exercise, experiment with Scatter and HexBin to make a plot that represents the entire dataset well.

# Solution goes here

Plotting percentiles

Sometimes a better way to get a sense of the relationship between variables is to divide the dataset into groups using one variable, and then plot percentiles of the other variable.

First I'll drop any rows that are missing height or weight.

cleaned = df.dropna(subset=['htm3', 'wtkg2'])

Then I'll divide the dataset into groups by height.

bins = np.arange(135, 210, 5)
indices = np.digitize(cleaned.htm3, bins)
groups = cleaned.groupby(indices)
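np.digitize maps each height to the index of the bin it falls in: values below the first edge get 0 and values at or beyond the last edge get len(bins), which is why the group labels below run from 0 to 15. For example:

np.digitize([134, 137, 152, 300], bins)   # array([ 0,  1,  4, 15])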

Here is the number of respondents in each group:

for i, group in groups:
    print(i, len(group))
0 305
1 228
2 477
3 2162
4 18759
5 45761
6 70610
7 72138
8 61725
9 49938
10 43555
11 20077
12 7784
13 1777
14 405
15 131

Now we can compute the CDF of weight within each group.

mean_heights = [group.htm3.mean() for i, group in groups]
cdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups]

And then extract the 25th, 50th, and 75th percentiles from each group.

for percent in [75, 50, 25]:
    weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
    label = '%dth' % percent
    thinkplot.Plot(mean_heights, weight_percentiles, label=label)

thinkplot.Config(xlabel='Height (cm)',
                 ylabel='Weight (kg)',
                 axis=[140, 210, 20, 200],
                 legend=False)
[Figure: 25th, 50th, and 75th percentiles of weight versus height]
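For comparison, pandas can compute the same percentiles more compactly with groupby and quantile; a sketch reusing indices from above (the variable name weight_quantiles is mine):

weight_quantiles = cleaned.groupby(indices).wtkg2.quantile([0.25, 0.5, 0.75]).unstack()

Each row of weight_quantiles holds the three weight percentiles for one height group.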

Exercise: Yet another option is to divide the dataset into groups and then plot the CDF for each group. As an exercise, divide the dataset into a smaller number of groups and plot the CDF for each group.

# Solution goes here

Correlation

The following function computes the covariance of two variables using NumPy's dot function.
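This is the sample covariance with divisor n: Cov(X, Y) = (1/n) Σ (xᵢ − x̄)(yᵢ − ȳ), which the dot product computes in a single step.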

def Cov(xs, ys, meanx=None, meany=None):
    xs = np.asarray(xs)
    ys = np.asarray(ys)

    if meanx is None:
        meanx = np.mean(xs)
    if meany is None:
        meany = np.mean(ys)

    cov = np.dot(xs-meanx, ys-meany) / len(xs)
    return cov

And here's an example:

heights, weights = cleaned.htm3, cleaned.wtkg2
Cov(heights, weights)
103.33290857697797
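As a check, NumPy's np.cov computes the same quantity; with ddof=0 it uses the same divisor, n, so the result should match up to floating-point error:

np.cov(heights, weights, ddof=0)[0][1]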

Covariance is useful for some calculations, but it doesn't mean much by itself. The coefficient of correlation is a standardized version of covariance that is easier to interpret.
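Specifically, Corr(X, Y) = Cov(X, Y) / (SD(X) SD(Y)); dividing by the standard deviations makes the result dimensionless and bounds it between -1 and 1.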

def Corr(xs, ys):
    xs = np.asarray(xs)
    ys = np.asarray(ys)

    meanx, varx = thinkstats2.MeanVar(xs)
    meany, vary = thinkstats2.MeanVar(ys)

    corr = Cov(xs, ys, meanx, meany) / np.sqrt(varx * vary)
    return corr

The correlation of height and weight is about 0.51, which is a moderately strong correlation.

Corr(heights, weights)
0.5087364789734771

NumPy provides a function that computes correlations, too:

np.corrcoef(heights, weights)
array([[1.        , 0.50873648],
       [0.50873648, 1.        ]])

The result is a matrix with self-correlations on the diagonal (which are always 1), and cross-correlations on the off-diagonals (which are always symmetric).
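To pull out the cross-correlation as a scalar, index into the matrix:

np.corrcoef(heights, weights)[0, 1]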

Pearson's correlation is not robust in the presence of outliers, and it tends to underestimate the strength of non-linear relationships.

Spearman's correlation is more robust, and it can handle non-linear relationships as long as they are monotonic. Here's a function that computes Spearman's correlation:

import pandas as pd

def SpearmanCorr(xs, ys):
    xranks = pd.Series(xs).rank()
    yranks = pd.Series(ys).rank()
    return Corr(xranks, yranks)
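One detail worth noting: pd.Series.rank assigns tied values the average of their ranks by default (method='average'), which is the standard convention for Spearman's correlation.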

For heights and weights, Spearman's correlation is a little higher:

SpearmanCorr(heights, weights)
0.5405846262320476
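To see the robustness claim in action, here is a small sketch on made-up data (not from the BRFSS): a strong linear relationship plus one extreme outlier. Pearson's correlation typically drops well below Spearman's, which barely moves because it depends only on ranks.

xs = np.arange(100, dtype=float)
ys = 2 * xs + np.random.normal(0, 5, 100)
ys[-1] = 1000   # one extreme outlier

Corr(xs, ys), SpearmanCorr(xs, ys)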

A pandas Series provides a corr method that computes correlations, and it offers 'spearman' as one of the method options.

def SpearmanCorr(xs, ys):
    xs = pd.Series(xs)
    ys = pd.Series(ys)
    return xs.corr(ys, method='spearman')

The result is essentially the same as for the version we wrote, up to floating-point error.

SpearmanCorr(heights, weights)
0.5405846262320457
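SciPy offers a third route, scipy.stats.spearmanr, which returns the correlation along with a p-value (assuming SciPy is installed):

from scipy.stats import spearmanr

rho, p = spearmanr(heights, weights)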

An alternative to Spearman's correlation is to transform one or both of the variables in a way that makes the relationship closer to linear, and then compute Pearson's correlation.

Corr(cleaned.htm3, np.log(cleaned.wtkg2))
0.5317282605983465

Exercises

Using data from the NSFG, make a scatter plot of birth weight versus mother’s age. Plot percentiles of birth weight versus mother’s age. Compute Pearson’s and Spearman’s correlations. How would you characterize the relationship between these variables?

import first

live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here