Think Stats by Allen B. Downey Think Stats is an introduction to Probability and Statistics for Python programmers.
This is the accompanying code for this book.
Examples and Exercises from Think Stats, 2nd Edition
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
I'll start with the data from the BRFSS again.
Here are the mean and standard deviation of female height in cm.
`NormalPdf` returns a `Pdf` object that represents the normal distribution with the given parameters. `Density` returns a probability density, which doesn't mean much by itself.
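As an illustration of what `Density` computes, here is a minimal sketch using SciPy instead of the book's `thinkstats2` module; the mean (163 cm) and standard deviation (7.3 cm) are approximately the BRFSS values for female height used in the book.

```python
import numpy as np
from scipy.stats import norm

# Approximate BRFSS parameters for female height in cm.
mean, std = 163, 7.3

# Evaluating the normal Pdf at the mean: this is what Density returns,
# a probability density, not a probability.
density_at_mean = norm(mean, std).pdf(mean)

# The same value from the normal density formula directly:
# f(mu) = 1 / (sigma * sqrt(2 * pi))
by_hand = 1 / (std * np.sqrt(2 * np.pi))

print(density_at_mean, by_hand)
```

The density at the mean is about 0.055 per cm; it only becomes a probability when multiplied by an interval width.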
`thinkplot` provides `Pdf`, which plots the probability density with a smooth curve.

`Pdf` provides `MakePmf`, which returns a `Pmf` object that approximates the `Pdf`.

If you have a `Pmf`, you can also plot it using `Pdf`, if you have reason to think it should be represented as a smooth curve.
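A rough sketch of what `MakePmf` does, assuming it evaluates the density on a grid of equally spaced points and normalizes the result (the grid range and size here are illustrative, not the book's defaults):

```python
import numpy as np
from scipy.stats import norm

mean, std = 163, 7.3
pdf = norm(mean, std)

# Evaluate the density on an evenly spaced grid...
xs = np.linspace(mean - 4 * std, mean + 4 * std, 101)
densities = pdf.pdf(xs)

# ...then normalize so the probabilities sum to 1, giving a discrete
# Pmf-style approximation of the continuous Pdf.
ps = densities / densities.sum()
pmf = dict(zip(xs, ps))
```

The resulting mapping behaves like a `Pmf`: probabilities sum to 1 and the mode sits at the mean.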
Using a sample from the actual distribution, we can estimate the PDF using Kernel Density Estimation (KDE).
If you run this a few times, you'll see how much variation there is in the estimate.
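The book wraps KDE in `thinkstats2.EstimatedPdf`; an equivalent sketch with SciPy's `gaussian_kde`, using a synthetic normal sample in place of the BRFSS heights:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng()

# Synthetic stand-in for the BRFSS heights (cm).
sample = rng.normal(163, 7.3, size=500)

# Fit a Gaussian KDE to the sample and evaluate the estimated density
# on a grid; rerunning with a fresh sample shows the variation in the
# estimate mentioned above.
kde = gaussian_kde(sample)
xs = np.linspace(130, 200, 101)
densities = kde(xs)
```

Each rerun draws a new sample, so the estimated curve wiggles from run to run even though the underlying distribution is fixed.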
Moments
Raw moments are just means of powers: the kth raw moment is the average of the kth powers of the values.
The first raw moment is the mean. The higher raw moments don't mean much by themselves.
The central moments are means of powers of deviations from the mean.
The first central moment is approximately 0 (it is exactly 0 in theory; in practice only floating-point error remains). The second central moment is the variance.
The standardized moments are ratios of central moments, with powers chosen to make the dimensions cancel.
The third standardized moment is skewness.
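The moment functions are simple enough to write out from the definitions above; this is a sketch, not necessarily identical to the `thinkstats2` versions:

```python
import numpy as np

def raw_moment(xs, k):
    # kth raw moment: the mean of the kth powers.
    return np.mean(np.asarray(xs) ** k)

def central_moment(xs, k):
    # kth central moment: the mean of kth powers of deviations
    # from the mean.
    xs = np.asarray(xs)
    return np.mean((xs - xs.mean()) ** k)

def standardized_moment(xs, k):
    # Divide by var**(k/2) so the dimensions cancel: the second
    # central moment is the variance, so var**(k/2) has the same
    # units as the kth central moment.
    var = central_moment(xs, 2)
    return central_moment(xs, k) / var ** (k / 2)

def skewness(xs):
    # Skewness is the third standardized moment.
    return standardized_moment(xs, 3)

sample = [1, 2, 2, 3, 3, 3, 10]  # small right-skewed sample
print(raw_moment(sample, 1), central_moment(sample, 2), skewness(sample))
```

On this sample the first raw moment equals the mean, the second central moment equals the variance, and the single large value makes the skewness positive.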
Normally a negative skewness indicates that the distribution has a longer tail on the left. In that case, the mean is usually less than the median.
But in this case the mean is greater than the median, which indicates skew to the right.
Because the skewness is based on the third moment, it is not robust; that is, it depends strongly on a few outliers. Pearson's median skewness is more robust.
Pearson's skewness is positive, indicating that the distribution of female heights is slightly skewed to the right.
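Pearson's median skewness can be sketched directly from its definition, 3 (mean − median) / standard deviation; this version is an illustration and may differ in detail from the book's implementation:

```python
import numpy as np

def pearson_median_skewness(xs):
    # 3 * (mean - median) / std: positive when the mean is pulled
    # to the right of the median, as with right-skewed data.
    xs = np.asarray(xs)
    return 3 * (xs.mean() - np.median(xs)) / xs.std()

# A single extreme value drags the mean to the right but barely moves
# the median, so this statistic changes much less than the moment-based
# skewness does: that is what makes it more robust.
sample = [1, 2, 2, 3, 3, 3, 100]
print(pearson_median_skewness(sample))
```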
Birth weights
Let's look at the distribution of birth weights again.
Based on KDE, it looks like the distribution is skewed to the left.
The mean is less than the median, which is consistent with left skew.
And both ways of computing skew are negative, which is consistent with left skew.
Adult weights
Now let's look at adult weights from the BRFSS. The distribution looks skewed to the right.
The mean is greater than the median, which is consistent with skew to the right.
And both ways of computing skewness are positive.
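Without the BRFSS file at hand, the same pattern can be reproduced with a synthetic right-skewed (lognormal) sample, a common model for adult weight; the parameters below are illustrative, not fitted to BRFSS:

```python
import numpy as np

rng = np.random.default_rng(17)

# Lognormal stand-in for adult weight in kg (illustrative parameters;
# the median is exp(4.3), roughly 74 kg).
weights = rng.lognormal(mean=4.3, sigma=0.2, size=10_000)

mean, median = weights.mean(), np.median(weights)
moment_skew = np.mean((weights - mean) ** 3) / weights.std() ** 3
pearson_skew = 3 * (mean - median) / weights.std()

# For a right-skewed distribution the mean exceeds the median and
# both skewness measures come out positive.
print(mean > median, moment_skew > 0, pearson_skew > 0)
```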
Exercises
The distribution of income is famously skewed to the right. In this exercise, we'll measure how strong that skew is.

The Current Population Survey (CPS) is a joint effort of the Bureau of Labor Statistics and the Census Bureau to study income and related variables. Data collected in 2013 is available from http://www.census.gov/hhes/www/cpstables/032013/hhinc/toc.htm. I downloaded `hinc06.xls`, which is an Excel spreadsheet with information about household income, and converted it to `hinc06.csv`, a CSV file you will find in the repository for this book. You will also find `hinc2.py`, which reads this file and transforms the data.
The dataset is in the form of a series of income ranges and the number of respondents who fell in each range. The lowest range includes respondents who reported annual household income “Under $5000.” The highest range includes respondents who made “$250,000 or more.”
To estimate mean and other statistics from these data, we have to make some assumptions about the lower and upper bounds, and how the values are distributed in each range. `hinc2.py` provides `InterpolateSample`, which shows one way to model this data. It takes a `DataFrame` with a column, `income`, that contains the upper bound of each range, and `freq`, which contains the number of respondents in each range.

It also takes `log_upper`, which is an assumed upper bound on the highest range, expressed in `log10` dollars. The default value, `log_upper=6.0`, represents the assumption that the largest income among the respondents is 10^6, or one million dollars.
`InterpolateSample` generates a pseudo-sample; that is, a sample of household incomes that yields the same number of respondents in each range as the actual data. It assumes that incomes in each range are equally spaced on a `log10` scale.
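Here is a sketch of what `InterpolateSample` might look like, reconstructed from the description above. The column names `income` and `freq` come from the text; the lower bound for the bottom range (10^3 dollars) and the tiny table are assumptions of this sketch, not the real CPS data:

```python
import numpy as np
import pandas as pd

def interpolate_sample(df, log_upper=6.0):
    df = df.copy()

    # Upper bound of each range in log10 dollars.
    df['log_upper'] = np.log10(df['income'])

    # Lower bound of each range is the previous range's upper bound;
    # assume 10**3 dollars for the bottom of the lowest range.
    df['log_lower'] = df['log_upper'].shift(1).fillna(3.0)

    # Replace the open-ended top range with the assumed upper bound.
    df.loc[df.index[-1], 'log_upper'] = log_upper

    # For each range, generate freq values equally spaced on a log10
    # scale; concatenating gives the pseudo-sample in log10 dollars.
    arrays = [np.linspace(row.log_lower, row.log_upper, int(row.freq))
              for _, row in df.iterrows()]
    return np.concatenate(arrays)

# Tiny illustrative table, not the real CPS data.
df = pd.DataFrame({'income': [5000, 25000, 250000],
                   'freq': [4, 6, 2]})
log_sample = interpolate_sample(df, log_upper=6.0)
sample = 10 ** log_sample  # back to dollars
```

The pseudo-sample has exactly `freq` values per range, so summary statistics computed from it reflect the binned counts plus the stated assumptions.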
Compute the median, mean, skewness and Pearson’s skewness of the resulting sample. What fraction of households report a taxable income below the mean? How do the results depend on the assumed upper bound?
All of this is based on an assumption that the highest income is one million dollars, but that's certainly not correct. What happens to the skew if the upper bound is 10 million?
Without better information about the top of this distribution, we can't say much about the skewness of the distribution.