Kernel: SageMath (stable)

Math 152: Intro to Mathematical Software

2017-02-17

Kiran Kedlaya; University of California, San Diego

adapted from lectures by William Stein, University of Washington

**Lecture 17: Pandas (part 3): Statistics**

Today's topics:

  • statsmodels

  • scikit-learn

Here's a slideshow about "data analysis and statistics in Python", with comparisons to R. (You can also run R in a worksheet using %R, but I am not a statistician, so I won't be able to offer much assistance with that.)

statsmodels

"Python module that allows users to explore data, estimate statistical models, and perform statistical tests."

Documentation: http://statsmodels.sourceforge.net/stable/

%auto
%default_mode python
%typeset_mode True

import statsmodels.api as sm
import pandas
from patsy import dmatrices

1. The statsmodels "Getting started" tutorial!

We download the Guerry dataset, a collection of historical data used in support of Andre-Michel Guerry’s 1833 Essay on the Moral Statistics of France. The data set is hosted online in comma-separated values format (CSV) by the Rdatasets repository. We could download the file locally and then load it using read_csv, but pandas takes care of all of this automatically for us:

df = sm.datasets.get_rdataset("Guerry", "HistData").data  # this is a familiar pandas DataFrame
df.head()
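For comparison, the manual route mentioned above would look roughly like this. The URL is my assumption about where the Rdatasets repository hosts its CSV files, so treat this as a sketch rather than a guaranteed path:

import pandas

# Hypothetical direct download; the URL is an assumption about the
# Rdatasets layout, not taken from the worksheet.
url = "https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/HistData/Guerry.csv"
df = pandas.read_csv(url)
df.head()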

We select the variables of interest and look at the bottom 5 rows:

df = df[['Department', 'Lottery', 'Literacy', 'Wealth', 'Region']]
df.tail(5)

Notice that there is one missing observation in the Region column. We eliminate it using a DataFrame method provided by pandas:

df = df.dropna()
df.tail(5)

Some statistics...

Substantive motivation and model: We want to know whether literacy rates in the 86 French departments are associated with per capita wagers on the Royal Lottery in the 1820s. We need to control for the level of wealth in each department, and we also want to include a series of dummy variables on the right-hand side of our regression equation to control for unobserved heterogeneity due to regional effects. The model is estimated using ordinary least squares regression (OLS).

(WARNING: I'm not a statistician...)
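To make the model concrete, here is the equation being estimated (my notation, not from the worksheet; one Region dummy is dropped as the baseline category):

$$\text{Lottery}_i = \beta_0 + \beta_1\,\text{Literacy}_i + \beta_2\,\text{Wealth}_i + \sum_{r} \gamma_r\,[\text{Region}_i = r] + \varepsilon_i$$

OLS chooses the coefficients to minimize the sum of squared residuals $\sum_i \varepsilon_i^2$.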

Use patsy's dmatrices to create design matrices, then use statsmodels to do an ordinary least squares fit.

y, X = dmatrices('Lottery ~ Literacy + Wealth + Region', data=df, return_type='dataframe')
mod = sm.OLS(y, X)  # describe the model
res = mod.fit()     # fit the model
res.summary()       # summarize the model
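The summary is one big table; if you want individual numbers out of the fit, the results object exposes them as attributes. A small sketch using standard statsmodels results attributes:

# Pull individual pieces out of the fitted results object.
print(res.params)    # fitted coefficients, indexed by variable name
print(res.rsquared)  # R^2 of the regression
print(res.bse)       # standard errors of the coefficients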

statsmodels also provides graphics functions. For example, we can draw a partial regression plot for a set of regressors:

sm.graphics.plot_partregress?
sm.graphics.plot_partregress('Lottery', 'Wealth', ['Region', 'Literacy'], data=df, obs_labels=False)

2. Scikit Learn: Easy Machine Learning

From their website:

Machine Learning in Python

  • Simple and efficient tools for data mining and data analysis

  • Accessible to everybody, and reusable in various contexts

  • Built on NumPy, SciPy, and matplotlib

  • Open source, commercially usable - BSD license

from sklearn import datasets digits = datasets.load_digits()

This "digits" thing is 1797 hand-written low-resolution numbers from the postal code numbers on letters (zip codes)...

type(digits.data)
digits.data
digits.target
digits.images[200], digits.target[200]
len(digits.target)
import matplotlib.pyplot as plt

def plot_data(n):
    print("scan of %s" % digits.target[n])
    show(plt.matshow(digits.data[n].reshape(8, 8), interpolation="nearest", cmap=plt.cm.Greys_r))
digits.images[0]
digits.data[0]
plot_data(200)
plot_data(389)
digits.target[389]

Example/goal here:

"In the case of the digits dataset, the task is to predict, given an image, which digit it represents. We are given 1797 samples of each of the 10 possible classes (the digits zero through nine) on which we fit an estimator to be able to predict the classes to which unseen samples belong."

from sklearn import svm  # use a support vector machine...

# We set the parameters manually, but scikit-learn has techniques
# for finding good parameters automatically.
clf = svm.SVC(gamma=0.001, C=100.)
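As an aside, here is a minimal sketch of the automatic parameter search mentioned in the comment, using cross-validated grid search. I'm assuming a scikit-learn recent enough (0.18+) that GridSearchCV lives in sklearn.model_selection:

from sklearn.model_selection import GridSearchCV

# Try a small grid of candidate (gamma, C) pairs with 3-fold cross-validation.
param_grid = {'gamma': [0.0001, 0.001, 0.01], 'C': [1., 10., 100.]}
search = GridSearchCV(svm.SVC(), param_grid, cv=3)
search.fit(digits.data, digits.target)
search.best_params_  # the (gamma, C) pair that scored best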
plot_data(1796)

This clf is a "classifier". Right now it doesn't know anything at all about our actual data.

We teach it by passing in our data to the fit method...

"We use all the images of our dataset apart from the last one. We select this training set with the [:-1] Python syntax, which produces a new array that contains all but the last entry of digits.data:"

# Train our classifier by giving it all but the last two scanned digits!
clf.fit(digits.data[:-2], digits.target[:-2])
plot_data(1796) # the last one -- not given
digits.data[-1:]
digits.target[-2:]
# It predicts the last one is an 8, correctly!
clf.predict(digits.data[-2:])

**Exercise for you right now:** Run the prediction on all 1797 scanned values to see how many are correct.

# To get you started, here is how to test the 5th scanned value.
digits.target[5]
clf.predict(digits.data[5:6])
n = 15
digits.target[n] == clf.predict(digits.data[n:n+1])[0]
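For the exercise itself, one compact way to score all 1797 scans at once is a NumPy elementwise comparison (a sketch; keep in mind clf has already seen all but the last two of these during training):

import numpy as np

# Predict every scan and count how many match the true labels.
correct = np.sum(clf.predict(digits.data) == digits.target)
print("%s of %s correct" % (correct, len(digits.target)))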
# After you do this -- completely recreate clf, but trained on way less data!
clf.fit(digits.data[:1000], digits.target[:1000])
pre = clf.predict(digits.data[1000:])
ans = digits.target[1000:]
# Count how many of the 797 held-out predictions are correct.
len([i for i in range(797) if pre[i] == ans[i]])
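Equivalently, scikit-learn ships a helper for this; a sketch using accuracy_score, which returns the fraction (not the count) of correct predictions:

from sklearn.metrics import accuracy_score

# Fraction of the 797 held-out scans classified correctly.
accuracy_score(ans, pre)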
plot_data(5)

bonus

Plots of low-dimensional embeddings of the digits dataset:

http://scikit-learn.org/stable/auto_examples/manifold/plot_lle_digits.html
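As a taste of what that page does, here is a minimal sketch of one such embedding using plain PCA (the linked examples use fancier manifold methods like LLE and t-SNE); it reuses the digits loaded above:

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Project the 64-dimensional digit vectors down to 2D and color
# each point by its true digit.
xy = PCA(n_components=2).fit_transform(digits.data)
plt.figure()
plt.scatter(xy[:, 0], xy[:, 1], c=digits.target, cmap=plt.cm.Paired, s=10)
plt.colorbar(label="digit")
plt.show()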